=== RUN TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-175611
multinode_test.go:450: (dbg) Run: out/minikube-linux-amd64 start -p multinode-175611-m02 --driver=kvm2
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-175611-m02 --driver=kvm2 : exit status 14 (85.277012ms)
-- stdout --
* [multinode-175611-m02] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15242
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr **
! Profile name 'multinode-175611-m02' is duplicated with machine name 'multinode-175611-m02' in profile 'multinode-175611'
X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run: out/minikube-linux-amd64 start -p multinode-175611-m03 --driver=kvm2
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-175611-m03 --driver=kvm2 : signal: killed (910.60649ms)
-- stdout --
* [multinode-175611-m03] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15242
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the kvm2 driver based on user configuration
* Starting control plane node multinode-175611-m03 in cluster multinode-175611-m03
* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
-- /stdout --
multinode_test.go:460: failed to start profile. args "out/minikube-linux-amd64 start -p multinode-175611-m03 --driver=kvm2 " : signal: killed
multinode_test.go:465: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-175611
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-175611: context deadline exceeded (941ns)
multinode_test.go:470: (dbg) Run: out/minikube-linux-amd64 delete -p multinode-175611-m03
multinode_test.go:470: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p multinode-175611-m03: context deadline exceeded (115ns)
multinode_test.go:472: failed to clean temporary profile. args "out/minikube-linux-amd64 delete -p multinode-175611-m03" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-175611 -n multinode-175611
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-175611 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-175611 logs -n 25: (1.270552516s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| cp | multinode-175611 cp multinode-175611-m02:/home/docker/cp-test.txt | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | multinode-175611-m03:/home/docker/cp-test_multinode-175611-m02_multinode-175611-m03.txt | | | | | |
| ssh | multinode-175611 ssh -n | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | multinode-175611-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-175611 ssh -n multinode-175611-m03 sudo cat | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | /home/docker/cp-test_multinode-175611-m02_multinode-175611-m03.txt | | | | | |
| cp | multinode-175611 cp testdata/cp-test.txt | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | multinode-175611-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-175611 ssh -n | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | multinode-175611-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-175611 cp multinode-175611-m03:/home/docker/cp-test.txt | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | /tmp/TestMultiNodeserialCopyFile3470561963/001/cp-test_multinode-175611-m03.txt | | | | | |
| ssh | multinode-175611 ssh -n | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | multinode-175611-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-175611 cp multinode-175611-m03:/home/docker/cp-test.txt | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | multinode-175611:/home/docker/cp-test_multinode-175611-m03_multinode-175611.txt | | | | | |
| ssh | multinode-175611 ssh -n | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | multinode-175611-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-175611 ssh -n multinode-175611 sudo cat | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | /home/docker/cp-test_multinode-175611-m03_multinode-175611.txt | | | | | |
| cp | multinode-175611 cp multinode-175611-m03:/home/docker/cp-test.txt | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | multinode-175611-m02:/home/docker/cp-test_multinode-175611-m03_multinode-175611-m02.txt | | | | | |
| ssh | multinode-175611 ssh -n | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | multinode-175611-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-175611 ssh -n multinode-175611-m02 sudo cat | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | /home/docker/cp-test_multinode-175611-m03_multinode-175611-m02.txt | | | | | |
| node | multinode-175611 node stop m03 | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| node | multinode-175611 node start | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-175611 | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | |
| stop | -p multinode-175611 | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:01 UTC |
| start | -p multinode-175611 | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:01 UTC | 31 Oct 22 18:15 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-175611 | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:15 UTC | |
| node | multinode-175611 node delete | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:15 UTC | 31 Oct 22 18:15 UTC |
| | m03 | | | | | |
| stop | multinode-175611 stop | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:15 UTC | 31 Oct 22 18:15 UTC |
| start | -p multinode-175611 | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:15 UTC | 31 Oct 22 18:26 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| node | list -p multinode-175611 | multinode-175611 | jenkins | v1.27.1 | 31 Oct 22 18:26 UTC | |
| start | -p multinode-175611-m02 | multinode-175611-m02 | jenkins | v1.27.1 | 31 Oct 22 18:26 UTC | |
| | --driver=kvm2 | | | | | |
| start | -p multinode-175611-m03 | multinode-175611-m03 | jenkins | v1.27.1 | 31 Oct 22 18:26 UTC | |
| | --driver=kvm2 | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/10/31 18:26:11
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1031 18:26:11.125694 62231 out.go:296] Setting OutFile to fd 1 ...
I1031 18:26:11.125840 62231 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 18:26:11.125843 62231 out.go:309] Setting ErrFile to fd 2...
I1031 18:26:11.125847 62231 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 18:26:11.125986 62231 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-42743/.minikube/bin
I1031 18:26:11.126593 62231 out.go:303] Setting JSON to false
I1031 18:26:11.127399 62231 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7723,"bootTime":1667233048,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1031 18:26:11.127499 62231 start.go:126] virtualization: kvm guest
I1031 18:26:11.129591 62231 out.go:177] * [multinode-175611-m03] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1031 18:26:11.131374 62231 notify.go:220] Checking for updates...
I1031 18:26:11.132960 62231 out.go:177] - MINIKUBE_LOCATION=15242
I1031 18:26:11.134329 62231 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1031 18:26:11.135825 62231 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
I1031 18:26:11.137154 62231 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
I1031 18:26:11.138538 62231 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1031 18:26:11.140081 62231 config.go:180] Loaded profile config "multinode-175611": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1031 18:26:11.140142 62231 driver.go:365] Setting default libvirt URI to qemu:///system
I1031 18:26:11.181711 62231 out.go:177] * Using the kvm2 driver based on user configuration
I1031 18:26:11.182945 62231 start.go:282] selected driver: kvm2
I1031 18:26:11.182960 62231 start.go:808] validating driver "kvm2" against <nil>
I1031 18:26:11.182986 62231 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1031 18:26:11.183258 62231 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1031 18:26:11.183443 62231 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15242-42743/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1031 18:26:11.198074 62231 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.27.1
I1031 18:26:11.198131 62231 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I1031 18:26:11.198634 62231 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
I1031 18:26:11.198759 62231 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
I1031 18:26:11.198787 62231 cni.go:95] Creating CNI manager for ""
I1031 18:26:11.198801 62231 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1031 18:26:11.198811 62231 start_flags.go:317] config:
{Name:multinode-175611-m03 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-175611-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 18:26:11.198922 62231 iso.go:124] acquiring lock: {Name:mk1b8df3d0e7e7151d07f634c55bc8cb360d70d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1031 18:26:11.201025 62231 out.go:177] * Starting control plane node multinode-175611-m03 in cluster multinode-175611-m03
I1031 18:26:11.202216 62231 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1031 18:26:11.202251 62231 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I1031 18:26:11.202262 62231 cache.go:57] Caching tarball of preloaded images
I1031 18:26:11.202356 62231 preload.go:174] Found /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1031 18:26:11.202369 62231 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I1031 18:26:11.202465 62231 profile.go:148] Saving config to /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/multinode-175611-m03/config.json ...
I1031 18:26:11.202476 62231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/multinode-175611-m03/config.json: {Name:mka676e20c37fe0993654df25a2a4714bf7b01cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 18:26:11.202622 62231 cache.go:208] Successfully downloaded all kic artifacts
I1031 18:26:11.202636 62231 start.go:364] acquiring machines lock for multinode-175611-m03: {Name:mk15de2cb0eed92cba3648c402e45ec73a1cbfb5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1031 18:26:11.202671 62231 start.go:368] acquired machines lock for "multinode-175611-m03" in 28.255µs
I1031 18:26:11.202699 62231 start.go:93] Provisioning new machine with config: &{Name:multinode-175611-m03 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-175611-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1031 18:26:11.202761 62231 start.go:125] createHost starting for "" (driver="kvm2")
*
* ==> Docker <==
* -- Journal begins at Mon 2022-10-31 18:16:06 UTC, ends at Mon 2022-10-31 18:26:12 UTC. --
Oct 31 18:16:33 multinode-175611 dockerd[844]: time="2022-10-31T18:16:33.569568807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 18:16:33 multinode-175611 dockerd[844]: time="2022-10-31T18:16:33.569643013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 18:16:33 multinode-175611 dockerd[844]: time="2022-10-31T18:16:33.569654813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 18:16:33 multinode-175611 dockerd[844]: time="2022-10-31T18:16:33.569828587Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2d6918e71bf991b6d201a8c88bae87bad4b090fdec69d97e29af6276ef71c233 pid=2027 runtime=io.containerd.runc.v2
Oct 31 18:16:34 multinode-175611 dockerd[844]: time="2022-10-31T18:16:34.255655819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 18:16:34 multinode-175611 dockerd[844]: time="2022-10-31T18:16:34.255706912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 18:16:34 multinode-175611 dockerd[844]: time="2022-10-31T18:16:34.255777301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 18:16:34 multinode-175611 dockerd[844]: time="2022-10-31T18:16:34.256047232Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c500f01efc43d90cb0728771c356291670051403654ad79846920976ea208711 pid=2074 runtime=io.containerd.runc.v2
Oct 31 18:16:36 multinode-175611 dockerd[844]: time="2022-10-31T18:16:36.718381119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 18:16:36 multinode-175611 dockerd[844]: time="2022-10-31T18:16:36.718607357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 18:16:36 multinode-175611 dockerd[844]: time="2022-10-31T18:16:36.718681931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 18:16:36 multinode-175611 dockerd[844]: time="2022-10-31T18:16:36.719172818Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/06f4d64b0c51b7545b0bab9edafc9b81c589cf4cfe40f0979faec11a93a74712 pid=2258 runtime=io.containerd.runc.v2
Oct 31 18:16:47 multinode-175611 dockerd[844]: time="2022-10-31T18:16:47.450236906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 18:16:47 multinode-175611 dockerd[844]: time="2022-10-31T18:16:47.450934007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 18:16:47 multinode-175611 dockerd[844]: time="2022-10-31T18:16:47.451113868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 18:16:47 multinode-175611 dockerd[844]: time="2022-10-31T18:16:47.451949653Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f1df0fd45577374129e3bc8d6158ebde90eb4b419f0a71152b9b66b4abb4b6a0 pid=2466 runtime=io.containerd.runc.v2
Oct 31 18:17:04 multinode-175611 dockerd[844]: time="2022-10-31T18:17:04.423161014Z" level=info msg="shim disconnected" id=c500f01efc43d90cb0728771c356291670051403654ad79846920976ea208711
Oct 31 18:17:04 multinode-175611 dockerd[838]: time="2022-10-31T18:17:04.423886476Z" level=info msg="ignoring event" container=c500f01efc43d90cb0728771c356291670051403654ad79846920976ea208711 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 18:17:04 multinode-175611 dockerd[844]: time="2022-10-31T18:17:04.424058881Z" level=warning msg="cleaning up after shim disconnected" id=c500f01efc43d90cb0728771c356291670051403654ad79846920976ea208711 namespace=moby
Oct 31 18:17:04 multinode-175611 dockerd[844]: time="2022-10-31T18:17:04.424077565Z" level=info msg="cleaning up dead shim"
Oct 31 18:17:04 multinode-175611 dockerd[844]: time="2022-10-31T18:17:04.445656734Z" level=warning msg="cleanup warnings time=\"2022-10-31T18:17:04Z\" level=info msg=\"starting signal loop\" namespace=moby pid=2740 runtime=io.containerd.runc.v2\n"
Oct 31 18:17:19 multinode-175611 dockerd[844]: time="2022-10-31T18:17:19.453169163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 18:17:19 multinode-175611 dockerd[844]: time="2022-10-31T18:17:19.453252233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 18:17:19 multinode-175611 dockerd[844]: time="2022-10-31T18:17:19.453264462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 18:17:19 multinode-175611 dockerd[844]: time="2022-10-31T18:17:19.453939274Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c29420223e2af427d7a1e0b2cd29ca879ed6262518b1877175e1bfdf463be803 pid=2911 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
c29420223e2af 6e38f40d628db 8 minutes ago Running storage-provisioner 3 2d6918e71bf99
f1df0fd455773 beaaf00edd38a 9 minutes ago Running kube-proxy 2 49acb80327f0e
06f4d64b0c51b d6e3e26021b60 9 minutes ago Running kindnet-cni 2 dced044e7ad56
c500f01efc43d 6e38f40d628db 9 minutes ago Exited storage-provisioner 2 2d6918e71bf99
d3789e2545d63 a8a176a5d5d69 9 minutes ago Running etcd 2 90fb7923f89e2
690e0b37aeaeb 6d23ec0e8b87e 9 minutes ago Running kube-scheduler 2 317f0ee11b3ce
741b9d7665bbe 6039992312758 9 minutes ago Running kube-controller-manager 2 e6cab2effd357
71635fe14f2af 0346dbd74bcb9 9 minutes ago Running kube-apiserver 2 8ab17f07cb066
15236358fc30b d6e3e26021b60 24 minutes ago Exited kindnet-cni 1 0bcd7f6da7d4e
493b45ebbbc77 beaaf00edd38a 24 minutes ago Exited kube-proxy 1 793df2f45c039
ed32bb110bbd0 6d23ec0e8b87e 24 minutes ago Exited kube-scheduler 1 7137cfe78d746
be68f465191bc a8a176a5d5d69 24 minutes ago Exited etcd 1 0b7d435ff2606
89bcd7b3aa70d 0346dbd74bcb9 24 minutes ago Exited kube-apiserver 1 671fdf79fe64a
30d1c1171fc7f 6039992312758 24 minutes ago Exited kube-controller-manager 1 ffb10987f27d1
8bc1bbb6d09a2 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 27 minutes ago Exited busybox 0 efb2f0b39793a
67e65275be7a0 5185b96f0becf 28 minutes ago Exited coredns 0 ea5ed99abc59c
*
* ==> coredns [67e65275be7a] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 9a34f9264402cb585a9f45fa2022f72259f38c0069ff0551404dff6d373c3318d40dccb7d57503b326f0f19faa2110be407c171bae22df1ef9dd2930a017b6e6
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> describe nodes <==
* Name: multinode-175611
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-175611
kubernetes.io/os=linux
minikube.k8s.io/commit=c34ec3182cacd96a3e168acffe335374d66b10cc
minikube.k8s.io/name=multinode-175611
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_10_31T17_57_06_0700
minikube.k8s.io/version=v1.27.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 31 Oct 2022 17:57:02 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-175611
AcquireTime: <unset>
RenewTime: Mon, 31 Oct 2022 18:26:12 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 31 Oct 2022 18:22:28 +0000 Mon, 31 Oct 2022 17:56:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 31 Oct 2022 18:22:28 +0000 Mon, 31 Oct 2022 17:56:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 31 Oct 2022 18:22:28 +0000 Mon, 31 Oct 2022 17:56:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 31 Oct 2022 18:22:28 +0000 Mon, 31 Oct 2022 18:17:22 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.114
Hostname: multinode-175611
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 54d005fa00074fba89f5cb22ed71372c
System UUID: 54d005fa-0007-4fba-89f5-cb22ed71372c
Boot ID: 94ea7f4f-f699-430a-a63f-98f30f5d0f71
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-65db55d5d6-m9bbn 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 27m
kube-system coredns-565d847f94-vwsgh 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (3%!)(MISSING) 170Mi (8%!)(MISSING) 28m
kube-system etcd-multinode-175611 100m (5%!)(MISSING) 0 (0%!)(MISSING) 100Mi (4%!)(MISSING) 0 (0%!)(MISSING) 29m
kube-system kindnet-89x2z 100m (5%!)(MISSING) 100m (5%!)(MISSING) 50Mi (2%!)(MISSING) 50Mi (2%!)(MISSING) 28m
kube-system kube-apiserver-multinode-175611 250m (12%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 29m
kube-system kube-controller-manager-multinode-175611 200m (10%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 29m
kube-system kube-proxy-tktj7 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 28m
kube-system kube-scheduler-multinode-175611 100m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 29m
kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 28m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%!)(MISSING) 100m (5%!)(MISSING)
memory 220Mi (10%!)(MISSING) 220Mi (10%!)(MISSING)
ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 28m kube-proxy
Normal Starting 9m25s kube-proxy
Normal Starting 24m kube-proxy
Normal NodeHasNoDiskPressure 29m kubelet Node multinode-175611 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 29m kubelet Node multinode-175611 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 29m kubelet Node multinode-175611 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 29m kubelet Updated Node Allocatable limit across pods
Normal Starting 29m kubelet Starting kubelet.
Normal RegisteredNode 28m node-controller Node multinode-175611 event: Registered Node multinode-175611 in Controller
Normal NodeReady 28m kubelet Node multinode-175611 status is now: NodeReady
Normal NodeHasNoDiskPressure 24m (x8 over 24m) kubelet Node multinode-175611 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 24m (x8 over 24m) kubelet Node multinode-175611 status is now: NodeHasSufficientMemory
Normal Starting 24m kubelet Starting kubelet.
Normal NodeHasSufficientPID 24m (x7 over 24m) kubelet Node multinode-175611 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 24m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 24m node-controller Node multinode-175611 event: Registered Node multinode-175611 in Controller
Normal Starting 9m48s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 9m48s (x8 over 9m48s) kubelet Node multinode-175611 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m48s (x8 over 9m48s) kubelet Node multinode-175611 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m48s (x7 over 9m48s) kubelet Node multinode-175611 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 9m48s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 9m28s node-controller Node multinode-175611 event: Registered Node multinode-175611 in Controller
Name: multinode-175611-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-175611-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 31 Oct 2022 18:21:58 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-175611-m02
AcquireTime: <unset>
RenewTime: Mon, 31 Oct 2022 18:26:03 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 31 Oct 2022 18:22:08 +0000 Mon, 31 Oct 2022 18:21:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 31 Oct 2022 18:22:08 +0000 Mon, 31 Oct 2022 18:21:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 31 Oct 2022 18:22:08 +0000 Mon, 31 Oct 2022 18:21:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 31 Oct 2022 18:22:08 +0000 Mon, 31 Oct 2022 18:22:08 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.195
Hostname: multinode-175611-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: a62f2376b3a1469c87b0b0be9ac1e409
System UUID: a62f2376-b3a1-469c-87b0-b0be9ac1e409
Boot ID: 47a6a210-f515-4570-81fd-c409aef4db6f
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-65db55d5d6-p6579 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 27m
kube-system kindnet-9kfkh 100m (5%!)(MISSING) 100m (5%!)(MISSING) 50Mi (2%!)(MISSING) 50Mi (2%!)(MISSING) 27m
kube-system kube-proxy-x6h9n 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 27m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%!)(MISSING) 100m (5%!)(MISSING)
memory 50Mi (2%!)(MISSING) 50Mi (2%!)(MISSING)
ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 19m kube-proxy
Normal Starting 27m kube-proxy
Normal Starting 4m11s kube-proxy
Normal NodeHasNoDiskPressure 27m (x8 over 27m) kubelet Node multinode-175611-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 27m (x8 over 27m) kubelet Node multinode-175611-m02 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientMemory 19m (x2 over 19m) kubelet Node multinode-175611-m02 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 19m (x2 over 19m) kubelet Node multinode-175611-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 19m kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 19m (x2 over 19m) kubelet Node multinode-175611-m02 status is now: NodeHasNoDiskPressure
Normal Starting 19m kubelet Starting kubelet.
Normal NodeReady 19m kubelet Node multinode-175611-m02 status is now: NodeReady
Normal Starting 4m14s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m14s (x2 over 4m14s) kubelet Node multinode-175611-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m14s (x2 over 4m14s) kubelet Node multinode-175611-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m14s (x2 over 4m14s) kubelet Node multinode-175611-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m14s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 4m4s kubelet Node multinode-175611-m02 status is now: NodeReady
*
* ==> dmesg <==
* [Oct31 18:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.066213] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.827409] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.311524] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.129931] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.359414] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.721937] systemd-fstab-generator[515]: Ignoring "noauto" for root device
[ +0.091788] systemd-fstab-generator[526]: Ignoring "noauto" for root device
[ +1.025114] systemd-fstab-generator[751]: Ignoring "noauto" for root device
[ +0.281790] systemd-fstab-generator[807]: Ignoring "noauto" for root device
[ +0.107080] systemd-fstab-generator[818]: Ignoring "noauto" for root device
[ +0.100460] systemd-fstab-generator[829]: Ignoring "noauto" for root device
[ +1.587888] systemd-fstab-generator[1008]: Ignoring "noauto" for root device
[ +0.102961] systemd-fstab-generator[1019]: Ignoring "noauto" for root device
[ +4.896741] systemd-fstab-generator[1219]: Ignoring "noauto" for root device
[ +0.369088] kauditd_printk_skb: 67 callbacks suppressed
[ +13.396834] kauditd_printk_skb: 8 callbacks suppressed
*
* ==> etcd [be68f465191b] <==
* {"level":"info","ts":"2022-10-31T18:01:34.248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce switched to configuration voters=(9075093065618959310)"}
{"level":"info","ts":"2022-10-31T18:01:34.252Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","added-peer-id":"7df1350fafd42bce","added-peer-peer-urls":["https://192.168.39.114:2380"]}
{"level":"info","ts":"2022-10-31T18:01:34.252Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","cluster-version":"3.5"}
{"level":"info","ts":"2022-10-31T18:01:34.255Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-10-31T18:01:34.283Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-10-31T18:01:34.284Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.114:2380"}
{"level":"info","ts":"2022-10-31T18:01:34.284Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.114:2380"}
{"level":"info","ts":"2022-10-31T18:01:34.284Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7df1350fafd42bce","initial-advertise-peer-urls":["https://192.168.39.114:2380"],"listen-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.114:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-10-31T18:01:34.284Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce is starting a new election at term 2"}
{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became pre-candidate at term 2"}
{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgPreVoteResp from 7df1350fafd42bce at term 2"}
{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became candidate at term 3"}
{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgVoteResp from 7df1350fafd42bce at term 3"}
{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became leader at term 3"}
{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7df1350fafd42bce elected leader 7df1350fafd42bce at term 3"}
{"level":"info","ts":"2022-10-31T18:01:35.586Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"7df1350fafd42bce","local-member-attributes":"{Name:multinode-175611 ClientURLs:[https://192.168.39.114:2379]}","request-path":"/0/members/7df1350fafd42bce/attributes","cluster-id":"101f5850ef417740","publish-timeout":"7s"}
{"level":"info","ts":"2022-10-31T18:01:35.586Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T18:01:35.588Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-10-31T18:01:35.588Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T18:01:35.590Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.114:2379"}
{"level":"info","ts":"2022-10-31T18:01:35.596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-10-31T18:01:35.596Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-10-31T18:11:35.618Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1140}
{"level":"info","ts":"2022-10-31T18:11:35.639Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1140,"took":"20.643042ms"}
*
* ==> etcd [d3789e2545d6] <==
* {"level":"info","ts":"2022-10-31T18:16:27.883Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"7df1350fafd42bce","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-10-31T18:16:27.884Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-10-31T18:16:27.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce switched to configuration voters=(9075093065618959310)"}
{"level":"info","ts":"2022-10-31T18:16:27.925Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","added-peer-id":"7df1350fafd42bce","added-peer-peer-urls":["https://192.168.39.114:2380"]}
{"level":"info","ts":"2022-10-31T18:16:27.928Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","cluster-version":"3.5"}
{"level":"info","ts":"2022-10-31T18:16:27.928Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-10-31T18:16:27.949Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-10-31T18:16:27.950Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7df1350fafd42bce","initial-advertise-peer-urls":["https://192.168.39.114:2380"],"listen-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.114:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-10-31T18:16:27.950Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-10-31T18:16:27.953Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.114:2380"}
{"level":"info","ts":"2022-10-31T18:16:27.953Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.114:2380"}
{"level":"info","ts":"2022-10-31T18:16:29.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce is starting a new election at term 3"}
{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became pre-candidate at term 3"}
{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgPreVoteResp from 7df1350fafd42bce at term 3"}
{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became candidate at term 4"}
{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgVoteResp from 7df1350fafd42bce at term 4"}
{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became leader at term 4"}
{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7df1350fafd42bce elected leader 7df1350fafd42bce at term 4"}
{"level":"info","ts":"2022-10-31T18:16:29.032Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"7df1350fafd42bce","local-member-attributes":"{Name:multinode-175611 ClientURLs:[https://192.168.39.114:2379]}","request-path":"/0/members/7df1350fafd42bce/attributes","cluster-id":"101f5850ef417740","publish-timeout":"7s"}
{"level":"info","ts":"2022-10-31T18:16:29.032Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T18:16:29.033Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T18:16:29.034Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.114:2379"}
{"level":"info","ts":"2022-10-31T18:16:29.034Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-10-31T18:16:29.034Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-10-31T18:16:29.034Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> kernel <==
* 18:26:13 up 10 min, 0 users, load average: 0.72, 0.30, 0.14
Linux multinode-175611 5.10.57 #1 SMP Wed Oct 19 23:03:20 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [71635fe14f2a] <==
* I1031 18:16:31.228185 1 establishing_controller.go:76] Starting EstablishingController
I1031 18:16:31.228360 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1031 18:16:31.228440 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1031 18:16:31.228535 1 crd_finalizer.go:266] Starting CRDFinalizer
I1031 18:16:31.264117 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1031 18:16:31.274608 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1031 18:16:31.275930 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1031 18:16:31.276089 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
E1031 18:16:31.326518 1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1031 18:16:31.361161 1 shared_informer.go:262] Caches are synced for node_authorizer
I1031 18:16:31.376541 1 shared_informer.go:262] Caches are synced for crd-autoregister
I1031 18:16:31.408132 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I1031 18:16:31.419307 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1031 18:16:31.421238 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1031 18:16:31.422782 1 cache.go:39] Caches are synced for autoregister controller
I1031 18:16:31.424171 1 apf_controller.go:305] Running API Priority and Fairness config worker
I1031 18:16:31.424530 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1031 18:16:31.963029 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1031 18:16:32.222238 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1031 18:16:33.833921 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I1031 18:16:33.979627 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I1031 18:16:33.993941 1 controller.go:616] quota admission added evaluator for: deployments.apps
I1031 18:16:34.055639 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1031 18:16:34.062965 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1031 18:17:36.952169 1 controller.go:616] quota admission added evaluator for: endpoints
*
* ==> kube-apiserver [89bcd7b3aa70] <==
* I1031 18:01:37.727304 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1031 18:01:37.727317 1 crd_finalizer.go:266] Starting CRDFinalizer
I1031 18:01:37.730574 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1031 18:01:37.732562 1 controller.go:80] Starting OpenAPI V3 AggregationController
I1031 18:01:37.732743 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1031 18:01:37.733624 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1031 18:01:37.761616 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I1031 18:01:37.844108 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1031 18:01:37.846638 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1031 18:01:37.849496 1 cache.go:39] Caches are synced for autoregister controller
E1031 18:01:37.851222 1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1031 18:01:37.852188 1 shared_informer.go:262] Caches are synced for node_authorizer
I1031 18:01:37.864521 1 shared_informer.go:262] Caches are synced for crd-autoregister
I1031 18:01:37.877930 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1031 18:01:37.878359 1 apf_controller.go:305] Running API Priority and Fairness config worker
I1031 18:01:37.894700 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I1031 18:01:38.470876 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1031 18:01:38.732300 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1031 18:01:40.811411 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I1031 18:01:40.919924 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I1031 18:01:40.930234 1 controller.go:616] quota admission added evaluator for: deployments.apps
I1031 18:01:40.989709 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1031 18:01:40.996301 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1031 18:01:50.866727 1 controller.go:616] quota admission added evaluator for: endpoints
I1031 18:01:50.894896 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [30d1c1171fc7] <==
* I1031 18:01:51.324385 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 18:01:51.324429 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1031 18:01:51.329724 1 shared_informer.go:262] Caches are synced for garbage collector
W1031 18:02:30.992058 1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
I1031 18:02:30.993896 1 event.go:294] "Event occurred" object="multinode-175611-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-175611-m03 status is now: NodeNotReady"
I1031 18:02:31.004158 1 event.go:294] "Event occurred" object="kube-system/kindnet-svfcl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 18:02:31.013439 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-4xkjz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 18:02:31.023916 1 event.go:294] "Event occurred" object="multinode-175611-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-175611-m02 status is now: NodeNotReady"
I1031 18:02:31.037642 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-x6h9n" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 18:02:31.051761 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-p6579" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 18:02:31.059691 1 event.go:294] "Event occurred" object="kube-system/kindnet-9kfkh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 18:06:23.738585 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-7ch9q"
W1031 18:06:27.730619 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-175611-m02" does not exist
I1031 18:06:27.732826 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-p6579" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-p6579"
I1031 18:06:27.744306 1 range_allocator.go:367] Set node multinode-175611-m02 PodCIDR to [10.244.1.0/24]
W1031 18:06:38.164414 1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
I1031 18:06:41.110286 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-p6579" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-p6579"
W1031 18:11:03.119068 1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
W1031 18:11:03.937259 1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
W1031 18:11:03.938173 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-175611-m03" does not exist
I1031 18:11:03.947759 1 range_allocator.go:367] Set node multinode-175611-m03 PodCIDR to [10.244.2.0/24]
W1031 18:11:44.982759 1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m03 node
I1031 18:11:46.168801 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-7ch9q" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-7ch9q"
I1031 18:15:47.691901 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-hs5pp"
W1031 18:15:49.696196 1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
*
* ==> kube-controller-manager [741b9d7665bb] <==
* I1031 18:16:44.252381 1 shared_informer.go:262] Caches are synced for persistent volume
I1031 18:16:44.267417 1 shared_informer.go:262] Caches are synced for endpoint_slice
I1031 18:16:44.331801 1 shared_informer.go:262] Caches are synced for resource quota
I1031 18:16:44.354142 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I1031 18:16:44.377924 1 shared_informer.go:262] Caches are synced for HPA
I1031 18:16:44.383901 1 shared_informer.go:262] Caches are synced for resource quota
I1031 18:16:44.771908 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 18:16:44.811025 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 18:16:44.811068 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1031 18:17:22.616402 1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
I1031 18:17:24.210086 1 event.go:294] "Event occurred" object="multinode-175611-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-175611-m02 status is now: NodeNotReady"
I1031 18:17:24.229198 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-x6h9n" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 18:17:24.246574 1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kindnet-svfcl"
I1031 18:17:24.259845 1 event.go:294] "Event occurred" object="kube-system/kindnet-9kfkh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 18:17:24.284179 1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-svfcl"
I1031 18:17:24.284196 1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-4xkjz"
I1031 18:17:24.293680 1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94-vwsgh" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-565d847f94-vwsgh"
I1031 18:17:24.293849 1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
I1031 18:17:24.293870 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-m9bbn" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-m9bbn"
I1031 18:17:24.309642 1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-4xkjz"
I1031 18:21:58.486818 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-p6579" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-p6579"
W1031 18:21:58.487019 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-175611-m02" does not exist
I1031 18:21:58.501070 1 range_allocator.go:367] Set node multinode-175611-m02 PodCIDR to [10.244.1.0/24]
W1031 18:22:08.581567 1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
I1031 18:22:09.342543 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-p6579" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-p6579"
*
* ==> kube-proxy [493b45ebbbc7] <==
* I1031 18:01:39.524498 1 node.go:163] Successfully retrieved node IP: 192.168.39.114
I1031 18:01:39.524684 1 server_others.go:138] "Detected node IP" address="192.168.39.114"
I1031 18:01:39.524782 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1031 18:01:39.597571 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I1031 18:01:39.597685 1 server_others.go:206] "Using iptables Proxier"
I1031 18:01:39.598344 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1031 18:01:39.602463 1 server.go:661] "Version info" version="v1.25.3"
I1031 18:01:39.602701 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 18:01:39.608895 1 config.go:317] "Starting service config controller"
I1031 18:01:39.610219 1 shared_informer.go:255] Waiting for caches to sync for service config
I1031 18:01:39.610329 1 config.go:226] "Starting endpoint slice config controller"
I1031 18:01:39.610410 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1031 18:01:39.613608 1 config.go:444] "Starting node config controller"
I1031 18:01:39.613754 1 shared_informer.go:255] Waiting for caches to sync for node config
I1031 18:01:39.711753 1 shared_informer.go:262] Caches are synced for endpoint slice config
I1031 18:01:39.711839 1 shared_informer.go:262] Caches are synced for service config
I1031 18:01:39.715047 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-proxy [f1df0fd45577] <==
* I1031 18:16:47.641529 1 node.go:163] Successfully retrieved node IP: 192.168.39.114
I1031 18:16:47.641615 1 server_others.go:138] "Detected node IP" address="192.168.39.114"
I1031 18:16:47.641636 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1031 18:16:47.672052 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I1031 18:16:47.672088 1 server_others.go:206] "Using iptables Proxier"
I1031 18:16:47.673082 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1031 18:16:47.673617 1 server.go:661] "Version info" version="v1.25.3"
I1031 18:16:47.673652 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 18:16:47.676159 1 config.go:317] "Starting service config controller"
I1031 18:16:47.676199 1 shared_informer.go:255] Waiting for caches to sync for service config
I1031 18:16:47.676995 1 config.go:226] "Starting endpoint slice config controller"
I1031 18:16:47.677032 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1031 18:16:47.684020 1 config.go:444] "Starting node config controller"
I1031 18:16:47.684053 1 shared_informer.go:255] Waiting for caches to sync for node config
I1031 18:16:47.777435 1 shared_informer.go:262] Caches are synced for endpoint slice config
I1031 18:16:47.777540 1 shared_informer.go:262] Caches are synced for service config
I1031 18:16:47.784131 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [690e0b37aeae] <==
* I1031 18:16:28.638417 1 serving.go:348] Generated self-signed cert in-memory
W1031 18:16:31.299900 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1031 18:16:31.300178 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1031 18:16:31.300220 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1031 18:16:31.300379 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1031 18:16:31.331956 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1031 18:16:31.331995 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 18:16:31.339598 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1031 18:16:31.341151 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1031 18:16:31.341632 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1031 18:16:31.341781 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1031 18:16:31.443155 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [ed32bb110bbd] <==
* I1031 18:01:34.529605 1 serving.go:348] Generated self-signed cert in-memory
W1031 18:01:37.777963 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1031 18:01:37.778458 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1031 18:01:37.778689 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1031 18:01:37.778718 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1031 18:01:37.818378 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1031 18:01:37.818419 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 18:01:37.827777 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1031 18:01:37.834438 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1031 18:01:37.834480 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1031 18:01:37.834750 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1031 18:01:37.936126 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Mon 2022-10-31 18:16:06 UTC, ends at Mon 2022-10-31 18:26:13 UTC. --
Oct 31 18:25:31 multinode-175611 kubelet[1225]: E1031 18:25:31.378587 1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-m9bbn" podUID=31aead9d-cbbe-45a7-9552-aa7dc7128d67
Oct 31 18:25:41 multinode-175611 kubelet[1225]: E1031 18:25:41.377986 1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-vwsgh_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b"
Oct 31 18:25:41 multinode-175611 kubelet[1225]: E1031 18:25:41.378284 1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b}
Oct 31 18:25:41 multinode-175611 kubelet[1225]: E1031 18:25:41.378350 1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 18:25:41 multinode-175611 kubelet[1225]: E1031 18:25:41.378437 1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-vwsgh" podUID=43c956b8-aa61-43e5-b432-f59ccdffde38
Oct 31 18:25:42 multinode-175611 kubelet[1225]: E1031 18:25:42.378571 1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-m9bbn_default\" network: could not retrieve port mappings: key is not found" podSandboxID="efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87"
Oct 31 18:25:42 multinode-175611 kubelet[1225]: E1031 18:25:42.378607 1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87}
Oct 31 18:25:42 multinode-175611 kubelet[1225]: E1031 18:25:42.378634 1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 18:25:42 multinode-175611 kubelet[1225]: E1031 18:25:42.378656 1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-m9bbn" podUID=31aead9d-cbbe-45a7-9552-aa7dc7128d67
Oct 31 18:25:52 multinode-175611 kubelet[1225]: E1031 18:25:52.378905 1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-vwsgh_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b"
Oct 31 18:25:52 multinode-175611 kubelet[1225]: E1031 18:25:52.378972 1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b}
Oct 31 18:25:52 multinode-175611 kubelet[1225]: E1031 18:25:52.379006 1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 18:25:52 multinode-175611 kubelet[1225]: E1031 18:25:52.379027 1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-vwsgh" podUID=43c956b8-aa61-43e5-b432-f59ccdffde38
Oct 31 18:25:56 multinode-175611 kubelet[1225]: E1031 18:25:56.377162 1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-m9bbn_default\" network: could not retrieve port mappings: key is not found" podSandboxID="efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87"
Oct 31 18:25:56 multinode-175611 kubelet[1225]: E1031 18:25:56.377502 1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87}
Oct 31 18:25:56 multinode-175611 kubelet[1225]: E1031 18:25:56.377577 1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 18:25:56 multinode-175611 kubelet[1225]: E1031 18:25:56.377632 1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-m9bbn" podUID=31aead9d-cbbe-45a7-9552-aa7dc7128d67
Oct 31 18:26:07 multinode-175611 kubelet[1225]: E1031 18:26:07.378267 1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-vwsgh_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b"
Oct 31 18:26:07 multinode-175611 kubelet[1225]: E1031 18:26:07.378325 1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b}
Oct 31 18:26:07 multinode-175611 kubelet[1225]: E1031 18:26:07.378360 1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 18:26:07 multinode-175611 kubelet[1225]: E1031 18:26:07.378385 1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-vwsgh" podUID=43c956b8-aa61-43e5-b432-f59ccdffde38
Oct 31 18:26:10 multinode-175611 kubelet[1225]: E1031 18:26:10.379782 1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-m9bbn_default\" network: could not retrieve port mappings: key is not found" podSandboxID="efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87"
Oct 31 18:26:10 multinode-175611 kubelet[1225]: E1031 18:26:10.379858 1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87}
Oct 31 18:26:10 multinode-175611 kubelet[1225]: E1031 18:26:10.379897 1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 18:26:10 multinode-175611 kubelet[1225]: E1031 18:26:10.379920 1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-m9bbn" podUID=31aead9d-cbbe-45a7-9552-aa7dc7128d67
*
* ==> storage-provisioner [c29420223e2a] <==
* I1031 18:17:19.534824 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1031 18:17:19.557432 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1031 18:17:19.557824 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1031 18:17:36.954508 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1031 18:17:36.970840 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-175611_e0a6ea65-758b-4cc3-8b06-820aaeda49ab!
I1031 18:17:36.991922 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dea15414-5b61-4984-9310-a6530f2c62a2", APIVersion:"v1", ResourceVersion:"1917", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-175611_e0a6ea65-758b-4cc3-8b06-820aaeda49ab became leader
I1031 18:17:37.110055 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-175611_e0a6ea65-758b-4cc3-8b06-820aaeda49ab!
*
* ==> storage-provisioner [c500f01efc43] <==
* I1031 18:16:34.376044 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1031 18:17:04.399354 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-175611 -n multinode-175611
helpers_test.go:261: (dbg) Run: kubectl --context multinode-175611 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-65db55d5d6-hs5pp
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context multinode-175611 describe pod busybox-65db55d5d6-hs5pp
helpers_test.go:280: (dbg) kubectl --context multinode-175611 describe pod busybox-65db55d5d6-hs5pp:
-- stdout --
Name: busybox-65db55d5d6-hs5pp
Namespace: default
Priority: 0
Service Account: default
Node: <none>
Labels: app=busybox
pod-template-hash=65db55d5d6
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/busybox-65db55d5d6
Containers:
busybox:
Image: gcr.io/k8s-minikube/busybox:1.28
Port: <none>
Host Port: <none>
Command:
sleep
3600
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qkqsf (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-qkqsf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10m default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
Warning FailedScheduling 10m default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
Warning FailedScheduling 8m47s (x2 over 8m49s) default-scheduler 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
Warning FailedScheduling 4m5s (x2 over 9m42s) default-scheduler 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (3.09s)