=== RUN TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-20220629181057-857010
multinode_test.go:450: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20220629181057-857010-m02 --driver=kvm2
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220629181057-857010-m02 --driver=kvm2 : exit status 14 (87.899289ms)
-- stdout --
* [multinode-20220629181057-857010-m02] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=14420
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr **
! Profile name 'multinode-20220629181057-857010-m02' is duplicated with machine name 'multinode-20220629181057-857010-m02' in profile 'multinode-20220629181057-857010'
X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
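The MK_USAGE exit above is the intended guard, not a regression: in a multi-node profile the first machine shares the profile name and later machines get -m02, -m03, ... suffixes, so a new profile named multinode-20220629181057-857010-m02 collides with a machine that the existing profile already owns. A minimal Go sketch of that naming rule and collision check (illustration only, not minikube's actual implementation):

    package main

    import "fmt"

    // machineNames sketches minikube's multi-node naming convention as seen in
    // this log: node 1 shares the profile name, node N gets a -m0N suffix.
    func machineNames(profile string, nodes int) []string {
        names := []string{profile}
        for i := 2; i <= nodes; i++ {
            names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
        }
        return names
    }

    func main() {
        existing := machineNames("multinode-20220629181057-857010", 2)
        candidate := "multinode-20220629181057-857010-m02" // the profile the test tries to start
        for _, m := range existing {
            if m == candidate {
                fmt.Printf("profile %q duplicates machine %q -> MK_USAGE\n", candidate, m)
            }
        }
    }

The test then retries with the -m03 suffix (multinode_test.go:458 below), which passes this uniqueness check and fails only because the harness kills the run.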
multinode_test.go:458: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20220629181057-857010-m03 --driver=kvm2
E0629 18:40:56.633971 857010 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175330-857010/client.crt: no such file or directory
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220629181057-857010-m03 --driver=kvm2 : signal: killed (3.874640742s)
-- stdout --
* [multinode-20220629181057-857010-m03] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=14420
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the kvm2 driver based on user configuration
* Starting control plane node multinode-20220629181057-857010-m03 in cluster multinode-20220629181057-857010-m03
* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
-- /stdout --
multinode_test.go:460: failed to start profile. args "out/minikube-linux-amd64 start -p multinode-20220629181057-857010-m03 --driver=kvm2 " : signal: killed
multinode_test.go:465: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-20220629181057-857010
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220629181057-857010: context deadline exceeded (2.876µs)
multinode_test.go:470: (dbg) Run: out/minikube-linux-amd64 delete -p multinode-20220629181057-857010-m03
multinode_test.go:470: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p multinode-20220629181057-857010-m03: context deadline exceeded (528ns)
multinode_test.go:472: failed to clean temporary profile. args "out/minikube-linux-amd64 delete -p multinode-20220629181057-857010-m03" : context deadline exceeded
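The sub-microsecond failures above (context deadline exceeded after 2.876µs and 528ns) are the signature of a shared test context whose deadline has already expired: once the killed start run consumed the remaining time budget, every follow-up command is abandoned before it even launches. A minimal Go sketch of that failure mode, assuming a ctx.Err() pre-check of the kind the harness presumably performs (binary path and profile name are placeholders):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // One context budgets the whole test; here its deadline has passed.
        ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
        defer cancel()
        time.Sleep(time.Millisecond) // deadline is now long gone

        // Guarding before exec'ing is why the log shows failures measured in
        // nanoseconds rather than a killed child process.
        if err := ctx.Err(); err != nil {
            fmt.Println(err) // context deadline exceeded
            return
        }
        _ = exec.CommandContext(ctx, "out/minikube-linux-amd64", "delete", "-p", "example").Run()
    }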
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20220629181057-857010 -n multinode-20220629181057-857010
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220629181057-857010 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220629181057-857010 logs -n 25: (1.448966651s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs:
-- stdout --
*
* ==> Audit <==
* |---------|--------------------------------------------------------------------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------------------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
| cp | multinode-20220629181057-857010 cp multinode-20220629181057-857010-m02:/home/docker/cp-test.txt | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | multinode-20220629181057-857010-m03:/home/docker/cp-test_multinode-20220629181057-857010-m02_multinode-20220629181057-857010-m03.txt | | | | | |
| ssh | multinode-20220629181057-857010 | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | ssh -n | | | | | |
| | multinode-20220629181057-857010-m02 | | | | | |
| | sudo cat /home/docker/cp-test.txt | | | | | |
| ssh | multinode-20220629181057-857010 ssh -n multinode-20220629181057-857010-m03 sudo cat | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | /home/docker/cp-test_multinode-20220629181057-857010-m02_multinode-20220629181057-857010-m03.txt | | | | | |
| cp | multinode-20220629181057-857010 cp testdata/cp-test.txt | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | multinode-20220629181057-857010-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-20220629181057-857010 | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | ssh -n | | | | | |
| | multinode-20220629181057-857010-m03 | | | | | |
| | sudo cat /home/docker/cp-test.txt | | | | | |
| cp | multinode-20220629181057-857010 cp | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | multinode-20220629181057-857010-m03:/home/docker/cp-test.txt | | | | | |
| | /tmp/TestMultiNodeserialCopyFile3395794004/001/cp-test_multinode-20220629181057-857010-m03.txt | | | | | |
| ssh | multinode-20220629181057-857010 | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | ssh -n | | | | | |
| | multinode-20220629181057-857010-m03 | | | | | |
| | sudo cat /home/docker/cp-test.txt | | | | | |
| cp | multinode-20220629181057-857010 cp multinode-20220629181057-857010-m03:/home/docker/cp-test.txt | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | multinode-20220629181057-857010:/home/docker/cp-test_multinode-20220629181057-857010-m03_multinode-20220629181057-857010.txt | | | | | |
| ssh | multinode-20220629181057-857010 | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | ssh -n | | | | | |
| | multinode-20220629181057-857010-m03 | | | | | |
| | sudo cat /home/docker/cp-test.txt | | | | | |
| ssh | multinode-20220629181057-857010 ssh -n multinode-20220629181057-857010 sudo cat | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | /home/docker/cp-test_multinode-20220629181057-857010-m03_multinode-20220629181057-857010.txt | | | | | |
| cp | multinode-20220629181057-857010 cp multinode-20220629181057-857010-m03:/home/docker/cp-test.txt | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | multinode-20220629181057-857010-m02:/home/docker/cp-test_multinode-20220629181057-857010-m03_multinode-20220629181057-857010-m02.txt | | | | | |
| ssh | multinode-20220629181057-857010 | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | ssh -n | | | | | |
| | multinode-20220629181057-857010-m03 | | | | | |
| | sudo cat /home/docker/cp-test.txt | | | | | |
| ssh | multinode-20220629181057-857010 ssh -n multinode-20220629181057-857010-m02 sudo cat | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | /home/docker/cp-test_multinode-20220629181057-857010-m03_multinode-20220629181057-857010-m02.txt | | | | | |
| node | multinode-20220629181057-857010 | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | node stop m03 | | | | | |
| node | multinode-20220629181057-857010 | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:15 UTC |
| | node start m03 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | |
| | multinode-20220629181057-857010 | | | | | |
| stop | -p | minikube | jenkins | v1.26.0 | 29 Jun 22 18:15 UTC | 29 Jun 22 18:16 UTC |
| | multinode-20220629181057-857010 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 29 Jun 22 18:16 UTC | 29 Jun 22 18:30 UTC |
| | multinode-20220629181057-857010 | | | | | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p | minikube | jenkins | v1.26.0 | 29 Jun 22 18:30 UTC | |
| | multinode-20220629181057-857010 | | | | | |
| node | multinode-20220629181057-857010 | minikube | jenkins | v1.26.0 | 29 Jun 22 18:30 UTC | 29 Jun 22 18:30 UTC |
| | node delete m03 | | | | | |
| stop | multinode-20220629181057-857010 | minikube | jenkins | v1.26.0 | 29 Jun 22 18:30 UTC | 29 Jun 22 18:30 UTC |
| | stop | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 29 Jun 22 18:30 UTC | 29 Jun 22 18:40 UTC |
| | multinode-20220629181057-857010 | | | | | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr --driver=kvm2 | | | | | |
| | | | | | | |
| node | list -p | minikube | jenkins | v1.26.0 | 29 Jun 22 18:40 UTC | |
| | multinode-20220629181057-857010 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 29 Jun 22 18:40 UTC | |
| | multinode-20220629181057-857010-m02 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 29 Jun 22 18:40 UTC | |
| | multinode-20220629181057-857010-m03 | | | | | |
| | --driver=kvm2 | | | | | |
|---------|--------------------------------------------------------------------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/06/29 18:40:53
Running on machine: ubuntu-20-agent
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0629 18:40:53.875974 869182 out.go:296] Setting OutFile to fd 1 ...
I0629 18:40:53.876099 869182 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0629 18:40:53.876103 869182 out.go:309] Setting ErrFile to fd 2...
I0629 18:40:53.876107 869182 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0629 18:40:53.876552 869182 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
I0629 18:40:53.876841 869182 out.go:303] Setting JSON to false
I0629 18:40:53.877698 869182 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent","uptime":156197,"bootTime":1656371857,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1033-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0629 18:40:53.877755 869182 start.go:125] virtualization: kvm guest
I0629 18:40:53.880337 869182 out.go:177] * [multinode-20220629181057-857010-m03] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0629 18:40:53.881985 869182 out.go:177] - MINIKUBE_LOCATION=14420
I0629 18:40:53.881943 869182 notify.go:193] Checking for updates...
I0629 18:40:53.883639 869182 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0629 18:40:53.885261 869182 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
I0629 18:40:53.886964 869182 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
I0629 18:40:53.888523 869182 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0629 18:40:53.890435 869182 config.go:178] Loaded profile config "multinode-20220629181057-857010": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0629 18:40:53.890502 869182 driver.go:360] Setting default libvirt URI to qemu:///system
I0629 18:40:53.927517 869182 out.go:177] * Using the kvm2 driver based on user configuration
I0629 18:40:53.928962 869182 start.go:284] selected driver: kvm2
I0629 18:40:53.928973 869182 start.go:808] validating driver "kvm2" against <nil>
I0629 18:40:53.928992 869182 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0629 18:40:53.929267 869182 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0629 18:40:53.929541 869182 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0629 18:40:53.945520 869182 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.26.0
I0629 18:40:53.945586 869182 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I0629 18:40:53.946067 869182 start_flags.go:377] Using suggested 6000MB memory alloc based on sys=32103MB, container=0MB
I0629 18:40:53.946172 869182 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
I0629 18:40:53.946189 869182 cni.go:95] Creating CNI manager for ""
I0629 18:40:53.946194 869182 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0629 18:40:53.946201 869182 start_flags.go:310] config:
{Name:multinode-20220629181057-857010-m03 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629181057-857010-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0629 18:40:53.946310 869182 iso.go:128] acquiring lock: {Name:mk6f8229a5d3a4dfd9d7a57d324167158de4dbaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0629 18:40:53.948872 869182 out.go:177] * Starting control plane node multinode-20220629181057-857010-m03 in cluster multinode-20220629181057-857010-m03
I0629 18:40:53.950325 869182 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0629 18:40:53.950366 869182 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
I0629 18:40:53.950376 869182 cache.go:57] Caching tarball of preloaded images
I0629 18:40:53.950508 869182 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0629 18:40:53.950525 869182 cache.go:60] Finished verifying existence of preloaded tar for v1.24.2 on docker
I0629 18:40:53.950680 869182 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/multinode-20220629181057-857010-m03/config.json ...
I0629 18:40:53.950710 869182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2--14420-850092-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/multinode-20220629181057-857010-m03/config.json: {Name:mk4f4e47781146d37d43986289d314fbc771694a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0629 18:40:53.950907 869182 cache.go:208] Successfully downloaded all kic artifacts
I0629 18:40:53.950939 869182 start.go:352] acquiring machines lock for multinode-20220629181057-857010-m03: {Name:mkd8c57a3b84afff0448f8fafd173296dac9e22b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0629 18:40:53.950997 869182 start.go:356] acquired machines lock for "multinode-20220629181057-857010-m03" in 43.839µs
I0629 18:40:53.951012 869182 start.go:91] Provisioning new machine with config: &{Name:multinode-20220629181057-857010-m03 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/14420/minikube-v1.26.0-1656448385-14420-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629181057-857010-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0629 18:40:53.951107 869182 start.go:131] createHost starting for "" (driver="kvm2")
*
* ==> Docker <==
* -- Journal begins at Wed 2022-06-29 18:31:09 UTC, ends at Wed 2022-06-29 18:40:58 UTC. --
Jun 29 18:31:35 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:35.702446704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 29 18:31:35 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:35.702691311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 29 18:31:35 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:35.702824681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 29 18:31:35 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:35.703216583Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b8896c1bcbddbae66eb69253ec7180870e25a1122138de528425589dd5191952 pid=1997 runtime=io.containerd.runc.v2
Jun 29 18:31:36 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:36.071317727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 29 18:31:36 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:36.071555295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 29 18:31:36 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:36.071568179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 29 18:31:36 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:36.073298802Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/96b2c380b5cbc46100ab4006fafc78114fd7193a97142875933edfcc36bbfba5 pid=2042 runtime=io.containerd.runc.v2
Jun 29 18:31:36 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:36.334426178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 29 18:31:36 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:36.334647441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 29 18:31:36 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:36.334659871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 29 18:31:36 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:36.337015929Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cb414583dee05f1ae290ad7461df7129f9865b1be12e65dfa7fcd6c525ae37ce pid=2098 runtime=io.containerd.runc.v2
Jun 29 18:31:39 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:39.125426780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 29 18:31:39 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:39.126134038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 29 18:31:39 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:39.126291013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 29 18:31:39 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:31:39.127166112Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9ddc1d6659dc48a4aa5aa6f7075b964b7d7bfa28d45ada5430ad15eee55d9af2 pid=2355 runtime=io.containerd.runc.v2
Jun 29 18:32:06 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:32:06.677369265Z" level=info msg="shim disconnected" id=cb414583dee05f1ae290ad7461df7129f9865b1be12e65dfa7fcd6c525ae37ce
Jun 29 18:32:06 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:32:06.677834611Z" level=warning msg="cleaning up after shim disconnected" id=cb414583dee05f1ae290ad7461df7129f9865b1be12e65dfa7fcd6c525ae37ce namespace=moby
Jun 29 18:32:06 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:32:06.677856836Z" level=info msg="cleaning up dead shim"
Jun 29 18:32:06 multinode-20220629181057-857010 dockerd[835]: time="2022-06-29T18:32:06.679712702Z" level=info msg="ignoring event" container=cb414583dee05f1ae290ad7461df7129f9865b1be12e65dfa7fcd6c525ae37ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 18:32:06 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:32:06.699158637Z" level=warning msg="cleanup warnings time=\"2022-06-29T18:32:06Z\" level=info msg=\"starting signal loop\" namespace=moby pid=2758 runtime=io.containerd.runc.v2\n"
Jun 29 18:32:20 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:32:20.657222950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 29 18:32:20 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:32:20.657711069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 29 18:32:20 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:32:20.657857923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 29 18:32:20 multinode-20220629181057-857010 dockerd[841]: time="2022-06-29T18:32:20.658711324Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2746239a50f6b5ab039a8c9e3530d75fc8b9dec7fbbdce87fb9a283d6cdf1bf4 pid=2943 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
2746239a50f6b 6e38f40d628db 8 minutes ago Running storage-provisioner 3 fc77bac3863ae
9ddc1d6659dc4 6fb66cd78abfe 9 minutes ago Running kindnet-cni 2 e289dbc6cb4a5
cb414583dee05 6e38f40d628db 9 minutes ago Exited storage-provisioner 2 fc77bac3863ae
96b2c380b5cbc a634548d10b03 9 minutes ago Running kube-proxy 2 b8896c1bcbddb
1ddc144fa803a aebe758cef4cd 9 minutes ago Running etcd 2 54533eab0619d
45c07f081f4ba d3377ffb7177c 9 minutes ago Running kube-apiserver 2 6a6fbba64cc15
b39c8f2928e98 34cdf99b1bb3b 9 minutes ago Running kube-controller-manager 2 6ba2e1357eb32
df5437afad839 5d725196c1f47 9 minutes ago Running kube-scheduler 2 e85c9d79fc6fa
324501acd8980 6fb66cd78abfe 24 minutes ago Exited kindnet-cni 1 d9569dd69299f
8d5231f343cd3 a634548d10b03 24 minutes ago Exited kube-proxy 1 d31269a9b8f01
5f629f78d27bd aebe758cef4cd 24 minutes ago Exited etcd 1 473f90a1a4e89
878241b32dd1a 5d725196c1f47 24 minutes ago Exited kube-scheduler 1 83ceadcb3a5fb
23d45e513e01d d3377ffb7177c 24 minutes ago Exited kube-apiserver 1 1b97e2a3130d8
a57b8a22adde4 34cdf99b1bb3b 24 minutes ago Exited kube-controller-manager 1 d5db3dfc25590
79fba148f922c gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 27 minutes ago Exited busybox 0 2a017a073d650
657577930f0e2 a4ca41631cc7a 28 minutes ago Exited coredns 0 bff2c7259ea15
*
* ==> coredns [657577930f0e] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: multinode-20220629181057-857010
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-20220629181057-857010
kubernetes.io/os=linux
minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
minikube.k8s.io/name=multinode-20220629181057-857010
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_06_29T18_11_54_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 29 Jun 2022 18:11:50 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-20220629181057-857010
AcquireTime: <unset>
RenewTime: Wed, 29 Jun 2022 18:40:55 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 29 Jun 2022 18:37:00 +0000 Wed, 29 Jun 2022 18:11:47 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 29 Jun 2022 18:37:00 +0000 Wed, 29 Jun 2022 18:11:47 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 29 Jun 2022 18:37:00 +0000 Wed, 29 Jun 2022 18:11:47 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 29 Jun 2022 18:37:00 +0000 Wed, 29 Jun 2022 18:31:54 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.130
Hostname: multinode-20220629181057-857010
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165916Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165916Ki
pods: 110
System Info:
Machine ID: 047e2ae122df448abcadc6ec95a7637b
System UUID: 047e2ae1-22df-448a-bcad-c6ec95a7637b
Boot ID: 5e436945-a027-4334-9f6a-3d08387e3398
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.16
Kubelet Version: v1.24.2
Kube-Proxy Version: v1.24.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default                     busybox-d46db594c-945kp                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
kube-system                 coredns-6d4b75cb6d-qrvhv                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
kube-system                 etcd-multinode-20220629181057-857010                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
kube-system                 kindnet-jfgv2                                               100m (5%)    100m (5%)    50Mi (2%)        50Mi (2%)      28m
kube-system                 kube-apiserver-multinode-20220629181057-857010              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
kube-system                 kube-controller-manager-multinode-20220629181057-857010    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
kube-system                 kube-proxy-mxtc8                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
kube-system                 kube-scheduler-multinode-20220629181057-857010              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                850m (42%)    100m (5%)
memory             220Mi (10%)   220Mi (10%)
ephemeral-storage  0 (0%)        0 (0%)
hugepages-2Mi      0 (0%)        0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 28m kube-proxy
Normal Starting 24m kube-proxy
Normal Starting 9m22s kube-proxy
Normal Starting 29m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 29m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 29m (x4 over 29m) kubelet Node multinode-20220629181057-857010 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 29m (x3 over 29m) kubelet Node multinode-20220629181057-857010 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 29m (x3 over 29m) kubelet Node multinode-20220629181057-857010 status is now: NodeHasSufficientPID
Normal Starting 29m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 29m kubelet Node multinode-20220629181057-857010 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 29m kubelet Node multinode-20220629181057-857010 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 29m kubelet Node multinode-20220629181057-857010 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 29m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 28m node-controller Node multinode-20220629181057-857010 event: Registered Node multinode-20220629181057-857010 in Controller
Normal NodeReady 28m kubelet Node multinode-20220629181057-857010 status is now: NodeReady
Normal NodeAllocatableEnforced 24m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 24m (x8 over 24m) kubelet Node multinode-20220629181057-857010 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 24m (x8 over 24m) kubelet Node multinode-20220629181057-857010 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24m (x7 over 24m) kubelet Node multinode-20220629181057-857010 status is now: NodeHasSufficientPID
Normal Starting 24m kubelet Starting kubelet.
Normal RegisteredNode 23m node-controller Node multinode-20220629181057-857010 event: Registered Node multinode-20220629181057-857010 in Controller
Normal Starting 9m32s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 9m32s (x8 over 9m32s) kubelet Node multinode-20220629181057-857010 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m32s (x8 over 9m32s) kubelet Node multinode-20220629181057-857010 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m32s (x7 over 9m32s) kubelet Node multinode-20220629181057-857010 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 9m32s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 9m12s node-controller Node multinode-20220629181057-857010 event: Registered Node multinode-20220629181057-857010 in Controller
Name: multinode-20220629181057-857010-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-20220629181057-857010-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 29 Jun 2022 18:36:31 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-20220629181057-857010-m02
AcquireTime: <unset>
RenewTime: Wed, 29 Jun 2022 18:40:56 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 29 Jun 2022 18:36:51 +0000 Wed, 29 Jun 2022 18:36:31 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 29 Jun 2022 18:36:51 +0000 Wed, 29 Jun 2022 18:36:31 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 29 Jun 2022 18:36:51 +0000 Wed, 29 Jun 2022 18:36:31 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 29 Jun 2022 18:36:51 +0000 Wed, 29 Jun 2022 18:36:51 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.116
Hostname: multinode-20220629181057-857010-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165916Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165916Ki
pods: 110
System Info:
Machine ID: c8d89dfe7b9d423d864e79dc66d80bd4
System UUID: c8d89dfe-7b9d-423d-864e-79dc66d80bd4
Boot ID: 29ca0287-6f9c-4206-ab3e-728d94e8b1dd
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.16
Kubelet Version: v1.24.2
Kube-Proxy Version: v1.24.2
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default                     busybox-d46db594c-8tctz    0 (0%)       0 (0%)      0 (0%)       0 (0%)      27m
kube-system                 kindnet-vb8mm              100m (5%)    100m (5%)   50Mi (2%)    50Mi (2%)   27m
kube-system                 kube-proxy-b2f4f           0 (0%)       0 (0%)      0 (0%)       0 (0%)      27m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                100m (5%)   100m (5%)
memory             50Mi (2%)   50Mi (2%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 19m kube-proxy
Normal Starting 27m kube-proxy
Normal Starting 4m25s kube-proxy
Normal NodeHasNoDiskPressure 27m (x8 over 27m) kubelet Node multinode-20220629181057-857010-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 27m (x8 over 27m) kubelet Node multinode-20220629181057-857010-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 19m (x2 over 19m) kubelet Node multinode-20220629181057-857010-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 19m (x2 over 19m) kubelet Node multinode-20220629181057-857010-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 19m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 19m (x2 over 19m) kubelet Node multinode-20220629181057-857010-m02 status is now: NodeHasSufficientMemory
Normal Starting 19m kubelet Starting kubelet.
Normal NodeReady 18m kubelet Node multinode-20220629181057-857010-m02 status is now: NodeReady
Normal Starting 4m27s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m27s (x2 over 4m27s) kubelet Node multinode-20220629181057-857010-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m27s (x2 over 4m27s) kubelet Node multinode-20220629181057-857010-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m27s (x2 over 4m27s) kubelet Node multinode-20220629181057-857010-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m27s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 4m7s kubelet Node multinode-20220629181057-857010-m02 status is now: NodeReady
*
* ==> dmesg <==
* [Jun29 18:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.065400] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.859682] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.568546] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.131288] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.518634] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.353016] systemd-fstab-generator[513]: Ignoring "noauto" for root device
[ +0.100184] systemd-fstab-generator[524]: Ignoring "noauto" for root device
[ +1.041928] systemd-fstab-generator[749]: Ignoring "noauto" for root device
[ +0.297250] systemd-fstab-generator[804]: Ignoring "noauto" for root device
[ +0.104432] systemd-fstab-generator[815]: Ignoring "noauto" for root device
[ +0.098963] systemd-fstab-generator[826]: Ignoring "noauto" for root device
[ +1.587805] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
[ +0.106444] systemd-fstab-generator[1017]: Ignoring "noauto" for root device
[ +5.097186] systemd-fstab-generator[1224]: Ignoring "noauto" for root device
[ +0.297110] kauditd_printk_skb: 67 callbacks suppressed
[ +10.389708] kauditd_printk_skb: 7 callbacks suppressed
*
* ==> etcd [1ddc144fa803] <==
* {"level":"info","ts":"2022-06-29T18:31:30.208Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"3bfdfb8084d9036b","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-06-29T18:31:30.209Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-06-29T18:31:30.212Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-06-29T18:31:30.213Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3bfdfb8084d9036b","initial-advertise-peer-urls":["https://192.168.39.130:2380"],"listen-peer-urls":["https://192.168.39.130:2380"],"advertise-client-urls":["https://192.168.39.130:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.130:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-06-29T18:31:30.213Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-06-29T18:31:30.214Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.130:2380"}
{"level":"info","ts":"2022-06-29T18:31:30.214Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.130:2380"}
{"level":"info","ts":"2022-06-29T18:31:30.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b switched to configuration voters=(4322887746748744555)"}
{"level":"info","ts":"2022-06-29T18:31:30.221Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b31a7968a7efeeee","local-member-id":"3bfdfb8084d9036b","added-peer-id":"3bfdfb8084d9036b","added-peer-peer-urls":["https://192.168.39.130:2380"]}
{"level":"info","ts":"2022-06-29T18:31:30.223Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b31a7968a7efeeee","local-member-id":"3bfdfb8084d9036b","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-29T18:31:30.223Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-29T18:31:31.168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b is starting a new election at term 3"}
{"level":"info","ts":"2022-06-29T18:31:31.168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b became pre-candidate at term 3"}
{"level":"info","ts":"2022-06-29T18:31:31.168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b received MsgPreVoteResp from 3bfdfb8084d9036b at term 3"}
{"level":"info","ts":"2022-06-29T18:31:31.168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b became candidate at term 4"}
{"level":"info","ts":"2022-06-29T18:31:31.168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b received MsgVoteResp from 3bfdfb8084d9036b at term 4"}
{"level":"info","ts":"2022-06-29T18:31:31.168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b became leader at term 4"}
{"level":"info","ts":"2022-06-29T18:31:31.168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3bfdfb8084d9036b elected leader 3bfdfb8084d9036b at term 4"}
{"level":"info","ts":"2022-06-29T18:31:31.168Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"3bfdfb8084d9036b","local-member-attributes":"{Name:multinode-20220629181057-857010 ClientURLs:[https://192.168.39.130:2379]}","request-path":"/0/members/3bfdfb8084d9036b/attributes","cluster-id":"b31a7968a7efeeee","publish-timeout":"7s"}
{"level":"info","ts":"2022-06-29T18:31:31.169Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-29T18:31:31.170Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-29T18:31:31.171Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.130:2379"}
{"level":"info","ts":"2022-06-29T18:31:31.173Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-29T18:31:31.173Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-06-29T18:31:31.173Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> etcd [5f629f78d27b] <==
* {"level":"info","ts":"2022-06-29T18:16:44.751Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b31a7968a7efeeee","local-member-id":"3bfdfb8084d9036b","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-29T18:16:44.751Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-29T18:16:44.752Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-06-29T18:16:44.756Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.130:2380"}
{"level":"info","ts":"2022-06-29T18:16:44.756Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.130:2380"}
{"level":"info","ts":"2022-06-29T18:16:44.757Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3bfdfb8084d9036b","initial-advertise-peer-urls":["https://192.168.39.130:2380"],"listen-peer-urls":["https://192.168.39.130:2380"],"advertise-client-urls":["https://192.168.39.130:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.130:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-06-29T18:16:44.757Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-06-29T18:16:45.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b is starting a new election at term 2"}
{"level":"info","ts":"2022-06-29T18:16:45.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b became pre-candidate at term 2"}
{"level":"info","ts":"2022-06-29T18:16:45.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b received MsgPreVoteResp from 3bfdfb8084d9036b at term 2"}
{"level":"info","ts":"2022-06-29T18:16:45.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b became candidate at term 3"}
{"level":"info","ts":"2022-06-29T18:16:45.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b received MsgVoteResp from 3bfdfb8084d9036b at term 3"}
{"level":"info","ts":"2022-06-29T18:16:45.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3bfdfb8084d9036b became leader at term 3"}
{"level":"info","ts":"2022-06-29T18:16:45.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3bfdfb8084d9036b elected leader 3bfdfb8084d9036b at term 3"}
{"level":"info","ts":"2022-06-29T18:16:45.967Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"3bfdfb8084d9036b","local-member-attributes":"{Name:multinode-20220629181057-857010 ClientURLs:[https://192.168.39.130:2379]}","request-path":"/0/members/3bfdfb8084d9036b/attributes","cluster-id":"b31a7968a7efeeee","publish-timeout":"7s"}
{"level":"info","ts":"2022-06-29T18:16:45.967Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-29T18:16:45.969Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-06-29T18:16:45.969Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-06-29T18:16:45.969Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-29T18:16:45.970Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.130:2379"}
{"level":"info","ts":"2022-06-29T18:16:45.970Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-29T18:26:46.000Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1149}
{"level":"info","ts":"2022-06-29T18:26:46.027Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1149,"took":"25.883554ms"}
{"level":"info","ts":"2022-06-29T18:30:52.403Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-06-29T18:30:52.404Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-20220629181057-857010","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.130:2380"],"advertise-client-urls":["https://192.168.39.130:2379"]}
*
* ==> kernel <==
* 18:40:58 up 9 min, 0 users, load average: 0.25, 0.36, 0.21
Linux multinode-20220629181057-857010 5.10.57 #1 SMP Tue Jun 28 23:44:16 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [23d45e513e01] <==
* I0629 18:16:48.404012 1 naming_controller.go:291] Starting NamingConditionController
I0629 18:16:48.404118 1 establishing_controller.go:76] Starting EstablishingController
I0629 18:16:48.404145 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0629 18:16:48.404155 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0629 18:16:48.404166 1 crd_finalizer.go:266] Starting CRDFinalizer
I0629 18:16:48.405205 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0629 18:16:48.417326 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0629 18:16:48.493336 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0629 18:16:48.551526 1 shared_informer.go:262] Caches are synced for node_authorizer
I0629 18:16:48.561200 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0629 18:16:48.562446 1 cache.go:39] Caches are synced for autoregister controller
I0629 18:16:48.564467 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0629 18:16:48.565567 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0629 18:16:48.567978 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0629 18:16:48.573027 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0629 18:16:49.032278 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0629 18:16:49.369404 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0629 18:16:51.209551 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0629 18:16:51.462567 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0629 18:16:51.485271 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0629 18:16:51.621960 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0629 18:16:51.629945 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0629 18:16:51.642319 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0629 18:17:01.301194 1 controller.go:611] quota admission added evaluator for: endpoints
I0629 18:17:01.343719 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-apiserver [45c07f081f4b] <==
* I0629 18:31:33.620600 1 controller.go:85] Starting OpenAPI V3 controller
I0629 18:31:33.625237 1 naming_controller.go:291] Starting NamingConditionController
I0629 18:31:33.625423 1 establishing_controller.go:76] Starting EstablishingController
I0629 18:31:33.625673 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0629 18:31:33.625843 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0629 18:31:33.626485 1 crd_finalizer.go:266] Starting CRDFinalizer
I0629 18:31:33.727317 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0629 18:31:33.732596 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0629 18:31:33.737411 1 shared_informer.go:262] Caches are synced for node_authorizer
I0629 18:31:33.742395 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0629 18:31:33.742462 1 cache.go:39] Caches are synced for autoregister controller
I0629 18:31:33.742739 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
E0629 18:31:33.771012 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0629 18:31:33.778418 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0629 18:31:33.793236 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0629 18:31:34.269049 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0629 18:31:34.589937 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0629 18:31:36.080069 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0629 18:31:36.347538 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0629 18:31:36.369698 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0629 18:31:36.467350 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0629 18:31:36.477704 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0629 18:31:36.705633 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0629 18:31:46.180323 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0629 18:32:38.195641 1 controller.go:611] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [a57b8a22adde] <==
* I0629 18:17:11.229336 1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d-qrvhv" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-6d4b75cb6d-qrvhv"
I0629 18:17:11.229347 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-945kp" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-945kp"
W0629 18:17:41.261084 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629181057-857010-m03 node
I0629 18:17:41.262804 1 event.go:294] "Event occurred" object="multinode-20220629181057-857010-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220629181057-857010-m02 status is now: NodeNotReady"
I0629 18:17:41.278712 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-8tctz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0629 18:17:41.296442 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-b2f4f" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0629 18:17:41.318539 1 event.go:294] "Event occurred" object="kube-system/kindnet-vb8mm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0629 18:17:41.338560 1 event.go:294] "Event occurred" object="multinode-20220629181057-857010-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220629181057-857010-m03 status is now: NodeNotReady"
I0629 18:17:41.362693 1 event.go:294] "Event occurred" object="kube-system/kindnet-hxlqk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0629 18:17:41.374750 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-zqzcp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0629 18:21:45.667664 1 event.go:294] "Event occurred" object="default/busybox-d46db594c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-d46db594c-pgppx"
W0629 18:21:49.555823 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220629181057-857010-m02" does not exist
I0629 18:21:49.557182 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-8tctz" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-8tctz"
I0629 18:21:49.567305 1 range_allocator.go:374] Set node multinode-20220629181057-857010-m02 PodCIDR to [10.244.1.0/24]
W0629 18:21:59.659054 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629181057-857010-m02 node
I0629 18:22:01.433342 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-8tctz" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-8tctz"
W0629 18:26:24.819331 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629181057-857010-m02 node
W0629 18:26:25.773243 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220629181057-857010-m03" does not exist
W0629 18:26:25.773283 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629181057-857010-m02 node
I0629 18:26:25.784376 1 range_allocator.go:374] Set node multinode-20220629181057-857010-m03 PodCIDR to [10.244.2.0/24]
W0629 18:26:46.402325 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629181057-857010-m03 node
I0629 18:26:46.493150 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-pgppx" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-pgppx"
I0629 18:30:48.988536 1 event.go:294] "Event occurred" object="default/busybox-d46db594c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-d46db594c-rqxnk"
W0629 18:30:50.995718 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629181057-857010-m02 node
I0629 18:30:51.531993 1 event.go:294] "Event occurred" object="multinode-20220629181057-857010-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-20220629181057-857010-m03 event: Removing Node multinode-20220629181057-857010-m03 from Controller"
*
* ==> kube-controller-manager [b39c8f2928e9] <==
* I0629 18:31:46.312801 1 shared_informer.go:262] Caches are synced for endpoint
I0629 18:31:46.332067 1 shared_informer.go:262] Caches are synced for attach detach
I0629 18:31:46.341515 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0629 18:31:46.367286 1 shared_informer.go:262] Caches are synced for resource quota
I0629 18:31:46.378950 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0629 18:31:46.404494 1 shared_informer.go:262] Caches are synced for disruption
I0629 18:31:46.404594 1 disruption.go:371] Sending events to api server.
I0629 18:31:46.416518 1 shared_informer.go:262] Caches are synced for resource quota
I0629 18:31:46.421657 1 shared_informer.go:262] Caches are synced for deployment
I0629 18:31:46.817307 1 shared_informer.go:262] Caches are synced for garbage collector
I0629 18:31:46.817359 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0629 18:31:46.863000 1 shared_informer.go:262] Caches are synced for garbage collector
W0629 18:31:54.016644 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629181057-857010-m02 node
I0629 18:32:26.212018 1 event.go:294] "Event occurred" object="multinode-20220629181057-857010-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220629181057-857010-m02 status is now: NodeNotReady"
I0629 18:32:26.228246 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-b2f4f" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0629 18:32:26.245753 1 event.go:294] "Event occurred" object="kube-system/kindnet-vb8mm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0629 18:32:26.265513 1 gc_controller.go:81] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-zqzcp"
I0629 18:32:26.282430 1 gc_controller.go:239] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-zqzcp"
I0629 18:32:26.283192 1 gc_controller.go:81] "PodGC is force deleting Pod" pod="kube-system/kindnet-hxlqk"
I0629 18:32:26.302752 1 gc_controller.go:239] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-hxlqk"
W0629 18:36:31.237994 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220629181057-857010-m02" does not exist
I0629 18:36:31.239611 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-8tctz" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-8tctz"
I0629 18:36:31.249034 1 range_allocator.go:374] Set node multinode-20220629181057-857010-m02 PodCIDR to [10.244.1.0/24]
W0629 18:36:51.505689 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629181057-857010-m02 node
I0629 18:36:56.299297 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-8tctz" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-8tctz"
*
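The range_allocator lines in both controller-manager blocks ("Set node ... PodCIDR to [10.244.1.0/24]", "... [10.244.2.0/24]") show each joining node being handed the next free /24 out of the cluster pod CIDR; 10.244.0.0/16 is the usual kubeadm/kindnet default and is assumed here. A minimal sketch of that carving:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	cluster := netip.MustParsePrefix("10.244.0.0/16")
	addr := cluster.Addr()
	for node := 0; node < 3; node++ {
		fmt.Printf("node %d PodCIDR %s\n", node, netip.PrefixFrom(addr, 24))
		// advance to the next /24 inside the /16 by bumping the third octet
		b := addr.As4()
		b[2]++
		addr = netip.AddrFrom4(b)
	}
}

This prints 10.244.0.0/24, 10.244.1.0/24 and 10.244.2.0/24 -- consistent with m02 and m03 receiving 10.244.1.0/24 and 10.244.2.0/24 above while the control plane presumably holds 10.244.0.0/24.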
* ==> kube-proxy [8d5231f343cd] <==
* I0629 18:16:51.338998 1 node.go:163] Successfully retrieved node IP: 192.168.39.130
I0629 18:16:51.339204 1 server_others.go:138] "Detected node IP" address="192.168.39.130"
I0629 18:16:51.339606 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0629 18:16:51.540076 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0629 18:16:51.540148 1 server_others.go:206] "Using iptables Proxier"
I0629 18:16:51.544503 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0629 18:16:51.566675 1 server.go:661] "Version info" version="v1.24.2"
I0629 18:16:51.567167 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0629 18:16:51.576210 1 config.go:226] "Starting endpoint slice config controller"
I0629 18:16:51.580511 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0629 18:16:51.580598 1 config.go:317] "Starting service config controller"
I0629 18:16:51.580606 1 shared_informer.go:255] Waiting for caches to sync for service config
I0629 18:16:51.581380 1 config.go:444] "Starting node config controller"
I0629 18:16:51.581388 1 shared_informer.go:255] Waiting for caches to sync for node config
I0629 18:16:51.682624 1 shared_informer.go:262] Caches are synced for node config
I0629 18:16:51.682756 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0629 18:16:51.682785 1 shared_informer.go:262] Caches are synced for service config
*
* ==> kube-proxy [96b2c380b5cb] <==
* I0629 18:31:36.626656 1 node.go:163] Successfully retrieved node IP: 192.168.39.130
I0629 18:31:36.626703 1 server_others.go:138] "Detected node IP" address="192.168.39.130"
I0629 18:31:36.626813 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0629 18:31:36.694966 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0629 18:31:36.695008 1 server_others.go:206] "Using iptables Proxier"
I0629 18:31:36.695638 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0629 18:31:36.696671 1 server.go:661] "Version info" version="v1.24.2"
I0629 18:31:36.696710 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0629 18:31:36.699313 1 config.go:317] "Starting service config controller"
I0629 18:31:36.699681 1 shared_informer.go:255] Waiting for caches to sync for service config
I0629 18:31:36.699839 1 config.go:226] "Starting endpoint slice config controller"
I0629 18:31:36.699990 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0629 18:31:36.702309 1 config.go:444] "Starting node config controller"
I0629 18:31:36.702319 1 shared_informer.go:255] Waiting for caches to sync for node config
I0629 18:31:36.801604 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0629 18:31:36.801646 1 shared_informer.go:262] Caches are synced for service config
I0629 18:31:36.804011 1 shared_informer.go:262] Caches are synced for node config
*
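Both kube-proxy instances log the same startup sequence: start each config controller, "Waiting for caches to sync", then "Caches are synced" about 100ms later once the initial LIST from the apiserver lands. That is client-go's shared-informer sync barrier; a minimal, self-contained sketch of the pattern (the kubeconfig path and the zero resync period are assumptions):

package main

import (
	"context"
	"log"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 0)
	svcInformer := factory.Core().V1().Services().Informer()

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	factory.Start(ctx.Done())

	// Blocks until the initial LIST completes -- the gap between
	// "Waiting for caches to sync" and "Caches are synced" above.
	if !cache.WaitForCacheSync(ctx.Done(), svcInformer.HasSynced) {
		log.Fatal("caches never synced")
	}
	log.Println("caches are synced for service config")
}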
* ==> kube-scheduler [878241b32dd1] <==
* I0629 18:16:44.913436 1 serving.go:348] Generated self-signed cert in-memory
W0629 18:16:48.474977 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0629 18:16:48.476129 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0629 18:16:48.476223 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0629 18:16:48.476239 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0629 18:16:48.507094 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
I0629 18:16:48.507290 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0629 18:16:48.509457 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0629 18:16:48.509565 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0629 18:16:48.510034 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0629 18:16:48.509583 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0629 18:16:48.610374 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [df5437afad83] <==
* W0629 18:31:29.958253 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.39.130:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.963826 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.130:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.958340 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.39.130:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.963848 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.130:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.958398 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.39.130:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.963957 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.130:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.958456 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.39.130:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.963977 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.130:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.958528 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.39.130:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.963991 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.130:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.958686 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.39.130:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.964005 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.130:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.958822 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.130:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.964022 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.130:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.958984 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.130:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.964037 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.130:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.959456 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.130:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.964050 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.130:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.962135 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.39.130:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.964060 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.130:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.963109 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.39.130:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.964072 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.130:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
W0629 18:31:29.964169 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.130:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
E0629 18:31:29.964245 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.130:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.130:8443: connect: connection refused
I0629 18:31:33.840736 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
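The second scheduler block is one reflector relist storm: at 18:31:29 the apiserver at 192.168.39.130:8443 is still down, so every informer's initial LIST fails with "connection refused" and is retried with backoff; four seconds later the apiserver is back and the client-ca cache syncs (18:31:33). A sketch of the retry shape, assuming a simple capped exponential backoff rather than client-go's exact policy, with a hypothetical listOnce standing in for the reflector's LIST:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func listOnce() error { return fmt.Errorf("connection refused") }

func main() {
	backoff := 100 * time.Millisecond
	for attempt := 0; attempt < 5; attempt++ {
		if err := listOnce(); err != nil {
			fmt.Printf("failed to list: %v; retrying in %v\n", err, backoff)
			// jittered sleep, then double the delay up to a cap
			time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
			if backoff *= 2; backoff > 5*time.Second {
				backoff = 5 * time.Second
			}
			continue
		}
		fmt.Println("caches are synced")
		return
	}
}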
* ==> kubelet <==
* -- Journal begins at Wed 2022-06-29 18:31:09 UTC, ends at Wed 2022-06-29 18:40:59 UTC. --
Jun 29 18:40:15 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:15.547404 1230 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0fcb7440-28fe-48b2-ae4c-ea48cabb2acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-945kp_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-d46db594c-945kp" podUID=0fcb7440-28fe-48b2-ae4c-ea48cabb2acc
Jun 29 18:40:22 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:22.548140 1230 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-6d4b75cb6d-qrvhv_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="bff2c7259ea155a4fed754770e28ca6c5d6d702bb612ab5fe12df8181803a35c"
Jun 29 18:40:22 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:22.548205 1230 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:bff2c7259ea155a4fed754770e28ca6c5d6d702bb612ab5fe12df8181803a35c}
Jun 29 18:40:22 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:22.548236 1230 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"145bd891-a2f4-42fa-b93e-6760c89822e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-qrvhv_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Jun 29 18:40:22 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:22.548265 1230 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"145bd891-a2f4-42fa-b93e-6760c89822e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-qrvhv_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-6d4b75cb6d-qrvhv" podUID=145bd891-a2f4-42fa-b93e-6760c89822e2
Jun 29 18:40:30 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:30.547613 1230 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-d46db594c-945kp_default\" network: could not retrieve port mappings: key is not found" podSandboxID="2a017a073d6506e19ffac482dfe025ee0f77b2f92926de6e697ff32b9896a166"
Jun 29 18:40:30 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:30.548153 1230 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:2a017a073d6506e19ffac482dfe025ee0f77b2f92926de6e697ff32b9896a166}
Jun 29 18:40:30 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:30.548226 1230 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0fcb7440-28fe-48b2-ae4c-ea48cabb2acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-945kp_default\\\" network: could not retrieve port mappings: key is not found\""
Jun 29 18:40:30 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:30.548281 1230 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0fcb7440-28fe-48b2-ae4c-ea48cabb2acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-945kp_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-d46db594c-945kp" podUID=0fcb7440-28fe-48b2-ae4c-ea48cabb2acc
Jun 29 18:40:35 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:35.547051 1230 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-6d4b75cb6d-qrvhv_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="bff2c7259ea155a4fed754770e28ca6c5d6d702bb612ab5fe12df8181803a35c"
Jun 29 18:40:35 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:35.547112 1230 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:bff2c7259ea155a4fed754770e28ca6c5d6d702bb612ab5fe12df8181803a35c}
Jun 29 18:40:35 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:35.547144 1230 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"145bd891-a2f4-42fa-b93e-6760c89822e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-qrvhv_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Jun 29 18:40:35 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:35.547168 1230 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"145bd891-a2f4-42fa-b93e-6760c89822e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-qrvhv_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-6d4b75cb6d-qrvhv" podUID=145bd891-a2f4-42fa-b93e-6760c89822e2
Jun 29 18:40:43 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:43.546710 1230 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-d46db594c-945kp_default\" network: could not retrieve port mappings: key is not found" podSandboxID="2a017a073d6506e19ffac482dfe025ee0f77b2f92926de6e697ff32b9896a166"
Jun 29 18:40:43 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:43.547100 1230 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:2a017a073d6506e19ffac482dfe025ee0f77b2f92926de6e697ff32b9896a166}
Jun 29 18:40:43 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:43.547170 1230 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0fcb7440-28fe-48b2-ae4c-ea48cabb2acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-945kp_default\\\" network: could not retrieve port mappings: key is not found\""
Jun 29 18:40:43 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:43.547224 1230 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0fcb7440-28fe-48b2-ae4c-ea48cabb2acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-945kp_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-d46db594c-945kp" podUID=0fcb7440-28fe-48b2-ae4c-ea48cabb2acc
Jun 29 18:40:48 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:48.548982 1230 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-6d4b75cb6d-qrvhv_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="bff2c7259ea155a4fed754770e28ca6c5d6d702bb612ab5fe12df8181803a35c"
Jun 29 18:40:48 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:48.549309 1230 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:bff2c7259ea155a4fed754770e28ca6c5d6d702bb612ab5fe12df8181803a35c}
Jun 29 18:40:48 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:48.549374 1230 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"145bd891-a2f4-42fa-b93e-6760c89822e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-qrvhv_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Jun 29 18:40:48 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:48.549428 1230 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"145bd891-a2f4-42fa-b93e-6760c89822e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-qrvhv_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-6d4b75cb6d-qrvhv" podUID=145bd891-a2f4-42fa-b93e-6760c89822e2
Jun 29 18:40:56 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:56.548228 1230 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-d46db594c-945kp_default\" network: could not retrieve port mappings: key is not found" podSandboxID="2a017a073d6506e19ffac482dfe025ee0f77b2f92926de6e697ff32b9896a166"
Jun 29 18:40:56 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:56.548744 1230 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:2a017a073d6506e19ffac482dfe025ee0f77b2f92926de6e697ff32b9896a166}
Jun 29 18:40:56 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:56.549009 1230 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0fcb7440-28fe-48b2-ae4c-ea48cabb2acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-945kp_default\\\" network: could not retrieve port mappings: key is not found\""
Jun 29 18:40:56 multinode-20220629181057-857010 kubelet[1230]: E0629 18:40:56.549195 1230 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0fcb7440-28fe-48b2-ae4c-ea48cabb2acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-945kp_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-d46db594c-945kp" podUID=0fcb7440-28fe-48b2-ae4c-ea48cabb2acc
*
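The kubelet block is the same two sandbox teardowns failing over and over (podSandboxIDs bff2c7... and 2a017a...). The journal begins at 18:31:09, so these sandboxes predate the node restart, and "could not retrieve port mappings: key is not found" suggests the CNI plugin's port-mapping checkpoint for them was lost with it -- an inference from the log, not a confirmed root cause. The kubelet's pod workers requeue a pod whose KillPodSandbox failed and retry on the next sync, which is why the identical errors recur every few seconds. A minimal sketch of that requeue loop (the stub error and the period are stand-ins):

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopSandbox stands in for the CRI StopPodSandbox call failing above.
func stopSandbox(id string) error {
	return errors.New("could not retrieve port mappings: key is not found")
}

func main() {
	queue := []string{"bff2c7259ea1...", "2a017a073d65..."}
	for tick := 0; tick < 3; tick++ {
		var requeue []string
		for _, id := range queue {
			if err := stopSandbox(id); err != nil {
				fmt.Printf("Error syncing pod, skipping: KillPodSandbox %s: %v\n", id, err)
				requeue = append(requeue, id) // try again next sync
			}
		}
		queue = requeue
		time.Sleep(10 * time.Millisecond) // stands in for the ~5-13s resync gap
	}
}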
* ==> storage-provisioner [2746239a50f6] <==
* I0629 18:32:20.736391 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0629 18:32:20.757224 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0629 18:32:20.757999 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0629 18:32:38.199573 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0629 18:32:38.200064 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220629181057-857010_88e5f022-508a-413b-8205-642f9648de45!
I0629 18:32:38.202238 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9abcf939-933b-434f-9d2d-d352a3d9dedc", APIVersion:"v1", ResourceVersion:"1926", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220629181057-857010_88e5f022-508a-413b-8205-642f9648de45 became leader
I0629 18:32:38.301338 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220629181057-857010_88e5f022-508a-413b-8205-642f9648de45!
*
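The two storage-provisioner blocks are leader election at work: the instance in the next block ([cb414583dee0]) came up first at 18:31:36, could not reach the service VIP (10.96.0.1:443) and exited fatally at 18:32:06, after which the replacement above acquired the kube-system/k8s.io-minikube-hostpath lock at 18:32:38 (the LeaderElection event shows an Endpoints-based lock) and started the controller. A sketch of the same acquire-then-run flow using client-go, with the newer Lease lock and typical timings substituted for whatever the provisioner actually uses:

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lease lock on the same namespace/name the provisioner contends for.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("successfully acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() { log.Println("lost lease; stopping") },
		},
	})
}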
* ==> storage-provisioner [cb414583dee0] <==
* I0629 18:31:36.630974 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0629 18:32:06.649019 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20220629181057-857010 -n multinode-20220629181057-857010
helpers_test.go:261: (dbg) Run: kubectl --context multinode-20220629181057-857010 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-d46db594c-rqxnk
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context multinode-20220629181057-857010 describe pod busybox-d46db594c-rqxnk
helpers_test.go:280: (dbg) kubectl --context multinode-20220629181057-857010 describe pod busybox-d46db594c-rqxnk:
-- stdout --
Name: busybox-d46db594c-rqxnk
Namespace: default
Priority: 0
Node: <none>
Labels: app=busybox
pod-template-hash=d46db594c
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/busybox-d46db594c
Containers:
busybox:
Image: gcr.io/k8s-minikube/busybox:1.28
Port: <none>
Host Port: <none>
Command:
sleep
3600
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q4crf (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-q4crf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10m default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
Warning FailedScheduling 10m default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
Warning FailedScheduling 8m31s (x2 over 8m33s) default-scheduler 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
Warning FailedScheduling 4m8s (x2 over 9m26s) default-scheduler 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
-- /stdout --
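The FailedScheduling events explain why busybox-d46db594c-rqxnk stays Pending: every candidate node either carried an untolerated taint or "didn't match pod anti-affinity rules", i.e. the busybox Deployment spreads one replica per node and every surviving node already hosts one. The Deployment manifest is not in this log; a rule of the following shape, expressed with the k8s.io/api Go types as an assumption about that manifest, produces exactly these events:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				// At most one app=busybox pod per hostname; once every
				// schedulable node hosts one, a new replica stays Pending.
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", affinity)
}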
helpers_test.go:283: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (6.23s)