=== RUN TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-20220728204317-10421
multinode_test.go:450: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20220728204317-10421-m02 --driver=kvm2
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220728204317-10421-m02 --driver=kvm2 : exit status 14 (81.723878ms)
-- stdout --
* [multinode-20220728204317-10421-m02] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=14555
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr **
! Profile name 'multinode-20220728204317-10421-m02' is duplicated with machine name 'multinode-20220728204317-10421-m02' in profile 'multinode-20220728204317-10421'
X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20220728204317-10421-m03 --driver=kvm2
E0728 21:13:11.012384 10421 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203144-10421/client.crt: no such file or directory
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220728204317-10421-m03 --driver=kvm2 : signal: killed (12.972796187s)
-- stdout --
* [multinode-20220728204317-10421-m03] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=14555
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the kvm2 driver based on user configuration
* Starting control plane node multinode-20220728204317-10421-m03 in cluster multinode-20220728204317-10421-m03
* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
-- /stdout --
multinode_test.go:460: failed to start profile. args "out/minikube-linux-amd64 start -p multinode-20220728204317-10421-m03 --driver=kvm2 " : signal: killed
multinode_test.go:465: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-20220728204317-10421
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220728204317-10421: context deadline exceeded (1.317µs)
multinode_test.go:470: (dbg) Run: out/minikube-linux-amd64 delete -p multinode-20220728204317-10421-m03
multinode_test.go:470: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p multinode-20220728204317-10421-m03: context deadline exceeded (154ns)
multinode_test.go:472: failed to clean temporary profile. args "out/minikube-linux-amd64 delete -p multinode-20220728204317-10421-m03" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20220728204317-10421 -n multinode-20220728204317-10421
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-20220728204317-10421 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220728204317-10421 logs -n 25: (1.322497141s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------------------------------------------------|------------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------------------------------------------------|------------------------------------|---------|---------|---------------------|---------------------|
| cp | multinode-20220728204317-10421 cp multinode-20220728204317-10421-m02:/home/docker/cp-test.txt | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | multinode-20220728204317-10421-m03:/home/docker/cp-test_multinode-20220728204317-10421-m02_multinode-20220728204317-10421-m03.txt | | | | | |
| ssh | multinode-20220728204317-10421 | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | ssh -n | | | | | |
| | multinode-20220728204317-10421-m02 | | | | | |
| | sudo cat /home/docker/cp-test.txt | | | | | |
| ssh | multinode-20220728204317-10421 ssh -n multinode-20220728204317-10421-m03 sudo cat | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | /home/docker/cp-test_multinode-20220728204317-10421-m02_multinode-20220728204317-10421-m03.txt | | | | | |
| cp | multinode-20220728204317-10421 cp testdata/cp-test.txt | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | multinode-20220728204317-10421-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-20220728204317-10421 | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | ssh -n | | | | | |
| | multinode-20220728204317-10421-m03 | | | | | |
| | sudo cat /home/docker/cp-test.txt | | | | | |
| cp | multinode-20220728204317-10421 cp multinode-20220728204317-10421-m03:/home/docker/cp-test.txt | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | /tmp/TestMultiNodeserialCopyFile2561602106/001/cp-test_multinode-20220728204317-10421-m03.txt | | | | | |
| ssh | multinode-20220728204317-10421 | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | ssh -n | | | | | |
| | multinode-20220728204317-10421-m03 | | | | | |
| | sudo cat /home/docker/cp-test.txt | | | | | |
| cp | multinode-20220728204317-10421 cp multinode-20220728204317-10421-m03:/home/docker/cp-test.txt | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | multinode-20220728204317-10421:/home/docker/cp-test_multinode-20220728204317-10421-m03_multinode-20220728204317-10421.txt | | | | | |
| ssh | multinode-20220728204317-10421 | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | ssh -n | | | | | |
| | multinode-20220728204317-10421-m03 | | | | | |
| | sudo cat /home/docker/cp-test.txt | | | | | |
| ssh | multinode-20220728204317-10421 ssh -n multinode-20220728204317-10421 sudo cat | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | /home/docker/cp-test_multinode-20220728204317-10421-m03_multinode-20220728204317-10421.txt | | | | | |
| cp | multinode-20220728204317-10421 cp multinode-20220728204317-10421-m03:/home/docker/cp-test.txt | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | multinode-20220728204317-10421-m02:/home/docker/cp-test_multinode-20220728204317-10421-m03_multinode-20220728204317-10421-m02.txt | | | | | |
| ssh | multinode-20220728204317-10421 | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | ssh -n | | | | | |
| | multinode-20220728204317-10421-m03 | | | | | |
| | sudo cat /home/docker/cp-test.txt | | | | | |
| ssh | multinode-20220728204317-10421 ssh -n multinode-20220728204317-10421-m02 sudo cat | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | /home/docker/cp-test_multinode-20220728204317-10421-m03_multinode-20220728204317-10421-m02.txt | | | | | |
| node | multinode-20220728204317-10421 | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:47 UTC |
| | node stop m03 | | | | | |
| node | multinode-20220728204317-10421 | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:47 UTC | 28 Jul 22 20:48 UTC |
| | node start m03 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:48 UTC | |
| | multinode-20220728204317-10421 | | | | | |
| stop | -p | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:48 UTC | 28 Jul 22 20:48 UTC |
| | multinode-20220728204317-10421 | | | | | |
| start | -p | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 20:48 UTC | 28 Jul 22 21:02 UTC |
| | multinode-20220728204317-10421 | | | | | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 21:02 UTC | |
| | multinode-20220728204317-10421 | | | | | |
| node | multinode-20220728204317-10421 | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 21:02 UTC | 28 Jul 22 21:03 UTC |
| | node delete m03 | | | | | |
| stop | multinode-20220728204317-10421 | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 21:03 UTC | 28 Jul 22 21:03 UTC |
| | stop | | | | | |
| start | -p | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 21:03 UTC | 28 Jul 22 21:13 UTC |
| | multinode-20220728204317-10421 | | | | | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| node | list -p | multinode-20220728204317-10421 | jenkins | v1.26.0 | 28 Jul 22 21:13 UTC | |
| | multinode-20220728204317-10421 | | | | | |
| start | -p | multinode-20220728204317-10421-m02 | jenkins | v1.26.0 | 28 Jul 22 21:13 UTC | |
| | multinode-20220728204317-10421-m02 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p | multinode-20220728204317-10421-m03 | jenkins | v1.26.0 | 28 Jul 22 21:13 UTC | |
| | multinode-20220728204317-10421-m03 | | | | | |
| | --driver=kvm2 | | | | | |
|---------|-----------------------------------------------------------------------------------------------------------------------------------|------------------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/07/28 21:13:04
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0728 21:13:04.780883 22566 out.go:296] Setting OutFile to fd 1 ...
I0728 21:13:04.780971 22566 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:13:04.780974 22566 out.go:309] Setting ErrFile to fd 2...
I0728 21:13:04.780978 22566 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:13:04.781085 22566 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
I0728 21:13:04.781610 22566 out.go:303] Setting JSON to false
I0728 21:13:04.782358 22566 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3334,"bootTime":1659039451,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0728 21:13:04.782415 22566 start.go:125] virtualization: kvm guest
I0728 21:13:04.784864 22566 out.go:177] * [multinode-20220728204317-10421-m03] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0728 21:13:04.786635 22566 out.go:177] - MINIKUBE_LOCATION=14555
I0728 21:13:04.786649 22566 notify.go:193] Checking for updates...
I0728 21:13:04.788023 22566 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0728 21:13:04.789685 22566 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
I0728 21:13:04.791192 22566 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
I0728 21:13:04.792799 22566 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0728 21:13:04.794667 22566 config.go:178] Loaded profile config "multinode-20220728204317-10421": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 21:13:04.794740 22566 driver.go:365] Setting default libvirt URI to qemu:///system
I0728 21:13:04.829935 22566 out.go:177] * Using the kvm2 driver based on user configuration
I0728 21:13:04.831320 22566 start.go:284] selected driver: kvm2
I0728 21:13:04.831325 22566 start.go:808] validating driver "kvm2" against <nil>
I0728 21:13:04.831337 22566 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0728 21:13:04.831584 22566 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0728 21:13:04.831801 22566 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0728 21:13:04.845747 22566 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.26.0
I0728 21:13:04.845790 22566 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I0728 21:13:04.846212 22566 start_flags.go:377] Using suggested 6000MB memory alloc based on sys=32103MB, container=0MB
I0728 21:13:04.846301 22566 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
I0728 21:13:04.846318 22566 cni.go:95] Creating CNI manager for ""
I0728 21:13:04.846325 22566 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0728 21:13:04.846330 22566 start_flags.go:310] config:
{Name:multinode-20220728204317-10421-m03 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728204317-10421-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 21:13:04.846422 22566 iso.go:128] acquiring lock: {Name:mkfce04bbb491925dd52e306b36022050357481a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0728 21:13:04.848237 22566 out.go:177] * Starting control plane node multinode-20220728204317-10421-m03 in cluster multinode-20220728204317-10421-m03
I0728 21:13:04.849690 22566 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0728 21:13:04.849712 22566 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
I0728 21:13:04.849728 22566 cache.go:57] Caching tarball of preloaded images
I0728 21:13:04.849839 22566 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0728 21:13:04.849859 22566 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on docker
I0728 21:13:04.849945 22566 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728204317-10421-m03/config.json ...
I0728 21:13:04.849958 22566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728204317-10421-m03/config.json: {Name:mk9f389efd5c9afaf4ab66e17d286d1484d5a046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0728 21:13:04.850090 22566 cache.go:208] Successfully downloaded all kic artifacts
I0728 21:13:04.850111 22566 start.go:370] acquiring machines lock for multinode-20220728204317-10421-m03: {Name:mkbf5c4d57c05fcf0fa2e6f9b0aff57d99a94f12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0728 21:13:04.850158 22566 start.go:374] acquired machines lock for "multinode-20220728204317-10421-m03" in 35.21µs
I0728 21:13:04.850171 22566 start.go:92] Provisioning new machine with config: &{Name:multinode-20220728204317-10421-m03 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/14534/minikube-v1.26.0-1657340101-14534-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728204317-10421-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0728 21:13:04.850237 22566 start.go:132] createHost starting for "" (driver="kvm2")
I0728 21:13:04.851986 22566 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I0728 21:13:04.852085 22566 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0728 21:13:04.852117 22566 main.go:134] libmachine: Launching plugin server for driver kvm2
I0728 21:13:04.866001 22566 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39079
I0728 21:13:04.866422 22566 main.go:134] libmachine: () Calling .GetVersion
I0728 21:13:04.866945 22566 main.go:134] libmachine: Using API Version 1
I0728 21:13:04.866960 22566 main.go:134] libmachine: () Calling .SetConfigRaw
I0728 21:13:04.867273 22566 main.go:134] libmachine: () Calling .GetMachineName
I0728 21:13:04.867427 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Calling .GetMachineName
I0728 21:13:04.867568 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Calling .DriverName
I0728 21:13:04.867677 22566 start.go:166] libmachine.API.Create for "multinode-20220728204317-10421-m03" (driver="kvm2")
I0728 21:13:04.867698 22566 client.go:168] LocalClient.Create starting
I0728 21:13:04.867718 22566 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem
I0728 21:13:04.867737 22566 main.go:134] libmachine: Decoding PEM data...
I0728 21:13:04.867747 22566 main.go:134] libmachine: Parsing certificate...
I0728 21:13:04.867791 22566 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem
I0728 21:13:04.867806 22566 main.go:134] libmachine: Decoding PEM data...
I0728 21:13:04.867814 22566 main.go:134] libmachine: Parsing certificate...
I0728 21:13:04.867831 22566 main.go:134] libmachine: Running pre-create checks...
I0728 21:13:04.867837 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Calling .PreCreateCheck
I0728 21:13:04.868154 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Calling .GetConfigRaw
I0728 21:13:04.868493 22566 main.go:134] libmachine: Creating machine...
I0728 21:13:04.868500 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Calling .Create
I0728 21:13:04.868637 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Creating KVM machine...
I0728 21:13:04.869694 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | found existing default KVM network
I0728 21:13:04.870503 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:04.870365 22589 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:3a:96}}
I0728 21:13:04.871201 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:04.871125 22589 network.go:288] reserving subnet 192.168.50.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.50.0:0xc0001882e8] misses:0}
I0728 21:13:04.871221 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:04.871168 22589 network.go:235] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0728 21:13:04.876233 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | trying to create private KVM network mk-multinode-20220728204317-10421-m03 192.168.50.0/24...
I0728 21:13:04.944390 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | private KVM network mk-multinode-20220728204317-10421-m03 192.168.50.0/24 created
I0728 21:13:04.944423 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728204317-10421-m03 ...
I0728 21:13:04.944441 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:04.944334 22589 common.go:107] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
I0728 21:13:04.944465 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/iso/amd64/minikube-v1.26.0-1657340101-14534-amd64.iso
I0728 21:13:04.944487 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/iso/amd64/minikube-v1.26.0-1657340101-14534-amd64.iso...
I0728 21:13:05.128877 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:05.128711 22589 common.go:114] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728204317-10421-m03/id_rsa...
I0728 21:13:05.204661 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:05.204557 22589 common.go:120] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728204317-10421-m03/multinode-20220728204317-10421-m03.rawdisk...
I0728 21:13:05.204692 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | Writing magic tar header
I0728 21:13:05.204703 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | Writing SSH key tar header
I0728 21:13:05.204713 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:05.204665 22589 common.go:134] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728204317-10421-m03 ...
I0728 21:13:05.204777 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728204317-10421-m03
I0728 21:13:05.204826 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728204317-10421-m03 (perms=drwx------)
I0728 21:13:05.204844 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines (perms=drwxrwxr-x)
I0728 21:13:05.204863 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines
I0728 21:13:05.204876 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube (perms=drwxr-xr-x)
I0728 21:13:05.204899 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd (perms=drwxrwxr-x)
I0728 21:13:05.204916 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0728 21:13:05.204930 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0728 21:13:05.204939 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
I0728 21:13:05.204950 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd
I0728 21:13:05.204958 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0728 21:13:05.204969 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | Checking permissions on dir: /home/jenkins
I0728 21:13:05.204979 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | Checking permissions on dir: /home
I0728 21:13:05.204991 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | Skipping /home - not owner
I0728 21:13:05.205007 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Creating domain...
I0728 21:13:05.206013 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) define libvirt domain using xml:
I0728 21:13:05.206031 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <domain type='kvm'>
I0728 21:13:05.206039 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <name>multinode-20220728204317-10421-m03</name>
I0728 21:13:05.206044 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <memory unit='MiB'>6000</memory>
I0728 21:13:05.206049 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <vcpu>2</vcpu>
I0728 21:13:05.206054 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <features>
I0728 21:13:05.206059 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <acpi/>
I0728 21:13:05.206063 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <apic/>
I0728 21:13:05.206068 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <pae/>
I0728 21:13:05.206073 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03)
I0728 21:13:05.206084 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </features>
I0728 21:13:05.206089 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <cpu mode='host-passthrough'>
I0728 21:13:05.206094 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03)
I0728 21:13:05.206102 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </cpu>
I0728 21:13:05.206107 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <os>
I0728 21:13:05.206112 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <type>hvm</type>
I0728 21:13:05.206117 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <boot dev='cdrom'/>
I0728 21:13:05.206122 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <boot dev='hd'/>
I0728 21:13:05.206144 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <bootmenu enable='no'/>
I0728 21:13:05.206156 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </os>
I0728 21:13:05.206163 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <devices>
I0728 21:13:05.206169 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <disk type='file' device='cdrom'>
I0728 21:13:05.206189 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <source file='/home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728204317-10421-m03/boot2docker.iso'/>
I0728 21:13:05.206196 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <target dev='hdc' bus='scsi'/>
I0728 21:13:05.206202 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <readonly/>
I0728 21:13:05.206207 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </disk>
I0728 21:13:05.206213 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <disk type='file' device='disk'>
I0728 21:13:05.206219 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <driver name='qemu' type='raw' cache='default' io='threads' />
I0728 21:13:05.206230 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <source file='/home/jenkins/minikube-integration/linux-amd64-kvm2--14555-3487-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728204317-10421-m03/multinode-20220728204317-10421-m03.rawdisk'/>
I0728 21:13:05.206240 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <target dev='hda' bus='virtio'/>
I0728 21:13:05.206246 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </disk>
I0728 21:13:05.206251 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <interface type='network'>
I0728 21:13:05.206257 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <source network='mk-multinode-20220728204317-10421-m03'/>
I0728 21:13:05.206265 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <model type='virtio'/>
I0728 21:13:05.206271 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </interface>
I0728 21:13:05.206276 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <interface type='network'>
I0728 21:13:05.206281 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <source network='default'/>
I0728 21:13:05.206286 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <model type='virtio'/>
I0728 21:13:05.206291 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </interface>
I0728 21:13:05.206317 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <serial type='pty'>
I0728 21:13:05.206323 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <target port='0'/>
I0728 21:13:05.206327 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </serial>
I0728 21:13:05.206333 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <console type='pty'>
I0728 21:13:05.206338 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <target type='serial' port='0'/>
I0728 21:13:05.206343 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </console>
I0728 21:13:05.206347 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <rng model='virtio'>
I0728 21:13:05.206354 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) <backend model='random'>/dev/random</backend>
I0728 21:13:05.206358 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </rng>
I0728 21:13:05.206363 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03)
I0728 21:13:05.206367 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03)
I0728 21:13:05.206372 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </devices>
I0728 21:13:05.206376 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) </domain>
I0728 21:13:05.206404 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03)
I0728 21:13:05.210970 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:26:21:ef in network default
I0728 21:13:05.211522 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Ensuring networks are active...
I0728 21:13:05.211539 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:05.212173 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Ensuring network default is active
I0728 21:13:05.212527 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Ensuring network mk-multinode-20220728204317-10421-m03 is active
I0728 21:13:05.213064 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Getting domain xml...
I0728 21:13:05.213799 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Creating domain...
I0728 21:13:06.456119 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) Waiting to get IP...
I0728 21:13:06.456890 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:06.457301 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:06.457325 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:06.457253 22589 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
I0728 21:13:06.721681 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:06.722088 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:06.722106 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:06.722042 22589 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
I0728 21:13:07.105337 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:07.105744 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:07.105761 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:07.105675 22589 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
I0728 21:13:07.530375 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:07.530819 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:07.530841 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:07.530768 22589 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
I0728 21:13:08.005343 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:08.005792 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:08.005821 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:08.005749 22589 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
I0728 21:13:08.594464 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:08.594870 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:08.594886 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:08.594833 22589 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
I0728 21:13:09.430168 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:09.430559 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:09.430578 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:09.430516 22589 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
I0728 21:13:10.179024 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:10.179453 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:10.179480 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:10.179393 22589 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
I0728 21:13:11.168289 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:11.168731 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:11.168754 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:11.168701 22589 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
I0728 21:13:12.359798 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:12.360376 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:12.360402 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:12.360318 22589 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
I0728 21:13:14.039012 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | domain multinode-20220728204317-10421-m03 has defined MAC address 52:54:00:e9:01:d5 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:14.039501 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | unable to find current IP address of domain multinode-20220728204317-10421-m03 in network mk-multinode-20220728204317-10421-m03
I0728 21:13:14.039530 22566 main.go:134] libmachine: (multinode-20220728204317-10421-m03) DBG | I0728 21:13:14.039446 22589 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
*
* ==> Docker <==
* -- Journal begins at Thu 2022-07-28 21:03:20 UTC, ends at Thu 2022-07-28 21:13:18 UTC. --
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.313534988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.313583092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.313597739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.313863664Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e5ab06c52ed0714ab8ac47be0dfe93c64301206122ef40ff530b6ea3429dad99 pid=2117 runtime=io.containerd.runc.v2
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.563390791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.563610125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.563623049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.563993857Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ba8c9569e254a1b7317fb1afe386e60111c654655e59495568144dac880c0081 pid=2166 runtime=io.containerd.runc.v2
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.679076314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.679160135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.679170971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 28 21:03:48 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:48.679938022Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c38b38cb6c4b74bea113fede167ff780aab349092211c5975a000fa77c8a4f98 pid=2203 runtime=io.containerd.runc.v2
Jul 28 21:03:51 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:51.016062315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 28 21:03:51 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:51.016234082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 28 21:03:51 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:51.016258656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 28 21:03:51 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:03:51.016610828Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/12315a4d55b7f1ceb6057995cad828b5e64de8899990ab4c029c6d7ff299a9d8 pid=2361 runtime=io.containerd.runc.v2
Jul 28 21:04:18 multinode-20220728204317-10421 dockerd[829]: time="2022-07-28T21:04:18.876028682Z" level=info msg="ignoring event" container=c38b38cb6c4b74bea113fede167ff780aab349092211c5975a000fa77c8a4f98 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 21:04:18 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:04:18.877432833Z" level=info msg="shim disconnected" id=c38b38cb6c4b74bea113fede167ff780aab349092211c5975a000fa77c8a4f98
Jul 28 21:04:18 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:04:18.877931300Z" level=warning msg="cleaning up after shim disconnected" id=c38b38cb6c4b74bea113fede167ff780aab349092211c5975a000fa77c8a4f98 namespace=moby
Jul 28 21:04:18 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:04:18.877948383Z" level=info msg="cleaning up dead shim"
Jul 28 21:04:18 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:04:18.890373762Z" level=warning msg="cleanup warnings time=\"2022-07-28T21:04:18Z\" level=info msg=\"starting signal loop\" namespace=moby pid=2769 runtime=io.containerd.runc.v2\n"
Jul 28 21:04:33 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:04:33.239473116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 28 21:04:33 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:04:33.240079947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 28 21:04:33 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:04:33.240254135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 28 21:04:33 multinode-20220728204317-10421 dockerd[835]: time="2022-07-28T21:04:33.241115709Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/54e86224ae8d388803815128e3662faed96947ba17ad7ca5d0375b7086471c90 pid=2946 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
54e86224ae8d3 6e38f40d628db 8 minutes ago Running storage-provisioner 4 bda722edb4aa7
12315a4d55b7f 6fb66cd78abfe 9 minutes ago Running kindnet-cni 2 00695117c1a34
c38b38cb6c4b7 6e38f40d628db 9 minutes ago Exited storage-provisioner 3 bda722edb4aa7
ba8c9569e254a 2ae1ba6417cbc 9 minutes ago Running kube-proxy 2 e5ab06c52ed07
ff3e6233771c3 aebe758cef4cd 9 minutes ago Running etcd 2 b2e9a0f5afd27
86f2d7637e3db 3a5aa3a515f5d 9 minutes ago Running kube-scheduler 2 d377ee4b2e02c
a836c5cf561c4 586c112956dfc 9 minutes ago Running kube-controller-manager 2 9fa391a5a9e35
89e211959d814 d521dd763e2e3 9 minutes ago Running kube-apiserver 2 8f4dfc478f690
981d37ac7702f 6fb66cd78abfe 24 minutes ago Exited kindnet-cni 1 d4248299c77ca
fa9f8dcb786bf 2ae1ba6417cbc 24 minutes ago Exited kube-proxy 1 51374c0c3cc17
2bc6fa43faeb1 3a5aa3a515f5d 24 minutes ago Exited kube-scheduler 1 dab3fca10ad2b
f9ccc91657ad1 586c112956dfc 24 minutes ago Exited kube-controller-manager 1 cfa3bbfa96948
52cc69f9f3386 aebe758cef4cd 24 minutes ago Exited etcd 1 ad1c236bef978
d410a7b73db19 d521dd763e2e3 24 minutes ago Exited kube-apiserver 1 5aa56da0aea78
c7e6defa68808 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 27 minutes ago Exited busybox 0 92ccbb14f82ea
67f910c62f3d6 a4ca41631cc7a 28 minutes ago Exited coredns 0 761ea68543558
*
* ==> coredns [67f910c62f3d] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: multinode-20220728204317-10421
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-20220728204317-10421
kubernetes.io/os=linux
minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
minikube.k8s.io/name=multinode-20220728204317-10421
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_07_28T20_44_12_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 28 Jul 2022 20:44:09 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-20220728204317-10421
AcquireTime: <unset>
RenewTime: Thu, 28 Jul 2022 21:13:16 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 28 Jul 2022 21:09:11 +0000 Thu, 28 Jul 2022 20:44:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 28 Jul 2022 21:09:11 +0000 Thu, 28 Jul 2022 20:44:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 28 Jul 2022 21:09:11 +0000 Thu, 28 Jul 2022 20:44:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 28 Jul 2022 21:09:11 +0000 Thu, 28 Jul 2022 21:04:05 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.3
Hostname: multinode-20220728204317-10421
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165916Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165916Ki
pods: 110
System Info:
Machine ID: b93f0c8a58a1485fb56370bbeb7db144
System UUID: b93f0c8a-58a1-485f-b563-70bbeb7db144
Boot ID: 1d0f1406-28d5-4844-8182-1d566a8abe0a
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.3
Kube-Proxy Version: v1.24.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-d46db594c-bccrv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
kube-system coredns-6d4b75cb6d-x864v 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 28m
kube-system etcd-multinode-20220728204317-10421 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 29m
kube-system kindnet-fp2hf 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 28m
kube-system kube-apiserver-multinode-20220728204317-10421 250m (12%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system kube-controller-manager-multinode-20220728204317-10421 200m (10%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system kube-proxy-jjnfs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m
kube-system kube-scheduler-multinode-20220728204317-10421 100m (5%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 28m kube-proxy
Normal Starting 24m kube-proxy
Normal Starting 9m29s kube-proxy
Normal Starting 29m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 29m kubelet Node multinode-20220728204317-10421 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 29m kubelet Node multinode-20220728204317-10421 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 29m kubelet Node multinode-20220728204317-10421 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 29m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 28m node-controller Node multinode-20220728204317-10421 event: Registered Node multinode-20220728204317-10421 in Controller
Normal NodeReady 28m kubelet Node multinode-20220728204317-10421 status is now: NodeReady
Normal NodeAllocatableEnforced 24m kubelet Updated Node Allocatable limit across pods
Normal Starting 24m kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 24m (x8 over 24m) kubelet Node multinode-20220728204317-10421 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24m (x7 over 24m) kubelet Node multinode-20220728204317-10421 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 24m (x8 over 24m) kubelet Node multinode-20220728204317-10421 status is now: NodeHasSufficientMemory
Normal RegisteredNode 24m node-controller Node multinode-20220728204317-10421 event: Registered Node multinode-20220728204317-10421 in Controller
Normal Starting 9m42s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 9m41s (x8 over 9m41s) kubelet Node multinode-20220728204317-10421 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m41s (x8 over 9m41s) kubelet Node multinode-20220728204317-10421 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m41s (x7 over 9m41s) kubelet Node multinode-20220728204317-10421 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 9m41s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 9m21s node-controller Node multinode-20220728204317-10421 event: Registered Node multinode-20220728204317-10421 in Controller
Name: multinode-20220728204317-10421-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-20220728204317-10421-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 28 Jul 2022 21:08:42 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-20220728204317-10421-m02
AcquireTime: <unset>
RenewTime: Thu, 28 Jul 2022 21:13:17 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 28 Jul 2022 21:09:02 +0000 Thu, 28 Jul 2022 21:08:42 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 28 Jul 2022 21:09:02 +0000 Thu, 28 Jul 2022 21:08:42 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 28 Jul 2022 21:09:02 +0000 Thu, 28 Jul 2022 21:08:42 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 28 Jul 2022 21:09:02 +0000 Thu, 28 Jul 2022 21:09:02 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.165
Hostname: multinode-20220728204317-10421-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165916Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165916Ki
pods: 110
System Info:
Machine ID: 02ab6f5e91454421bc47d212e4309279
System UUID: 02ab6f5e-9145-4421-bc47-d212e4309279
Boot ID: 34c1ebe1-d0ad-42f8-ab06-d4ee11690545
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.3
Kube-Proxy Version: v1.24.3
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-d46db594c-srx7j 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
kube-system kindnet-t85pp 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 27m
kube-system kube-proxy-gmlr4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 19m kube-proxy
Normal Starting 27m kube-proxy
Normal Starting 4m34s kube-proxy
Normal NodeHasNoDiskPressure 27m (x8 over 27m) kubelet Node multinode-20220728204317-10421-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 27m (x8 over 27m) kubelet Node multinode-20220728204317-10421-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 19m (x2 over 19m) kubelet Node multinode-20220728204317-10421-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 19m (x2 over 19m) kubelet Node multinode-20220728204317-10421-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 19m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 19m (x2 over 19m) kubelet Node multinode-20220728204317-10421-m02 status is now: NodeHasSufficientMemory
Normal Starting 19m kubelet Starting kubelet.
Normal NodeReady 19m kubelet Node multinode-20220728204317-10421-m02 status is now: NodeReady
Normal Starting 4m36s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m36s (x2 over 4m36s) kubelet Node multinode-20220728204317-10421-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m36s (x2 over 4m36s) kubelet Node multinode-20220728204317-10421-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m36s (x2 over 4m36s) kubelet Node multinode-20220728204317-10421-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m36s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 4m16s kubelet Node multinode-20220728204317-10421-m02 status is now: NodeReady
*
* ==> dmesg <==
* [Jul28 21:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.069484] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.883586] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.042925] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.140846] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.376106] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +5.881718] systemd-fstab-generator[515]: Ignoring "noauto" for root device
[ +0.099159] systemd-fstab-generator[526]: Ignoring "noauto" for root device
[ +1.049280] systemd-fstab-generator[744]: Ignoring "noauto" for root device
[ +0.281516] systemd-fstab-generator[798]: Ignoring "noauto" for root device
[ +0.108538] systemd-fstab-generator[809]: Ignoring "noauto" for root device
[ +0.103446] systemd-fstab-generator[820]: Ignoring "noauto" for root device
[ +1.623465] systemd-fstab-generator[991]: Ignoring "noauto" for root device
[ +0.094677] systemd-fstab-generator[1002]: Ignoring "noauto" for root device
[ +5.061216] systemd-fstab-generator[1209]: Ignoring "noauto" for root device
[ +0.275016] kauditd_printk_skb: 67 callbacks suppressed
[ +11.997429] kauditd_printk_skb: 7 callbacks suppressed
*
* ==> etcd [52cc69f9f338] <==
* {"level":"info","ts":"2022-07-28T20:48:56.624Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d030e9334923ef1","local-member-id":"ac0ce77fb984259c","cluster-version":"3.5"}
{"level":"info","ts":"2022-07-28T20:48:56.625Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-07-28T20:48:56.638Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-07-28T20:48:56.638Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.3:2380"}
{"level":"info","ts":"2022-07-28T20:48:56.639Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.3:2380"}
{"level":"info","ts":"2022-07-28T20:48:56.640Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-07-28T20:48:56.640Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ac0ce77fb984259c","initial-advertise-peer-urls":["https://192.168.39.3:2380"],"listen-peer-urls":["https://192.168.39.3:2380"],"advertise-client-urls":["https://192.168.39.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-07-28T20:48:57.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c is starting a new election at term 2"}
{"level":"info","ts":"2022-07-28T20:48:57.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became pre-candidate at term 2"}
{"level":"info","ts":"2022-07-28T20:48:57.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgPreVoteResp from ac0ce77fb984259c at term 2"}
{"level":"info","ts":"2022-07-28T20:48:57.532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became candidate at term 3"}
{"level":"info","ts":"2022-07-28T20:48:57.532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgVoteResp from ac0ce77fb984259c at term 3"}
{"level":"info","ts":"2022-07-28T20:48:57.532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became leader at term 3"}
{"level":"info","ts":"2022-07-28T20:48:57.532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ac0ce77fb984259c elected leader ac0ce77fb984259c at term 3"}
{"level":"info","ts":"2022-07-28T20:48:57.533Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ac0ce77fb984259c","local-member-attributes":"{Name:multinode-20220728204317-10421 ClientURLs:[https://192.168.39.3:2379]}","request-path":"/0/members/ac0ce77fb984259c/attributes","cluster-id":"1d030e9334923ef1","publish-timeout":"7s"}
{"level":"info","ts":"2022-07-28T20:48:57.533Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-28T20:48:57.535Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.3:2379"}
{"level":"info","ts":"2022-07-28T20:48:57.537Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-28T20:48:57.538Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-07-28T20:48:57.568Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-07-28T20:48:57.568Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-07-28T20:58:57.602Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1116}
{"level":"info","ts":"2022-07-28T20:58:57.628Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1116,"took":"23.095969ms"}
{"level":"info","ts":"2022-07-28T21:03:03.262Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-07-28T21:03:03.263Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-20220728204317-10421","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.3:2380"],"advertise-client-urls":["https://192.168.39.3:2379"]}
*
* ==> etcd [ff3e6233771c] <==
* {"level":"info","ts":"2022-07-28T21:03:40.803Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-07-28T21:03:40.804Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-07-28T21:03:40.805Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ac0ce77fb984259c","initial-advertise-peer-urls":["https://192.168.39.3:2380"],"listen-peer-urls":["https://192.168.39.3:2380"],"advertise-client-urls":["https://192.168.39.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-07-28T21:03:40.814Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-07-28T21:03:40.818Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.3:2380"}
{"level":"info","ts":"2022-07-28T21:03:40.818Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.3:2380"}
{"level":"info","ts":"2022-07-28T21:03:41.951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c is starting a new election at term 3"}
{"level":"info","ts":"2022-07-28T21:03:41.951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became pre-candidate at term 3"}
{"level":"info","ts":"2022-07-28T21:03:41.951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgPreVoteResp from ac0ce77fb984259c at term 3"}
{"level":"info","ts":"2022-07-28T21:03:41.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became candidate at term 4"}
{"level":"info","ts":"2022-07-28T21:03:41.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgVoteResp from ac0ce77fb984259c at term 4"}
{"level":"info","ts":"2022-07-28T21:03:41.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became leader at term 4"}
{"level":"info","ts":"2022-07-28T21:03:41.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ac0ce77fb984259c elected leader ac0ce77fb984259c at term 4"}
{"level":"info","ts":"2022-07-28T21:03:41.953Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ac0ce77fb984259c","local-member-attributes":"{Name:multinode-20220728204317-10421 ClientURLs:[https://192.168.39.3:2379]}","request-path":"/0/members/ac0ce77fb984259c/attributes","cluster-id":"1d030e9334923ef1","publish-timeout":"7s"}
{"level":"info","ts":"2022-07-28T21:03:41.953Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-28T21:03:41.955Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-28T21:03:41.958Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-07-28T21:03:41.959Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.3:2379"}
{"level":"info","ts":"2022-07-28T21:03:41.964Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-07-28T21:03:41.964Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-07-28T21:13:16.095Z","caller":"traceutil/trace.go:171","msg":"trace[114758008] linearizableReadLoop","detail":"{readStateIndex:2792; appliedIndex:2792; }","duration":"276.170325ms","start":"2022-07-28T21:13:15.819Z","end":"2022-07-28T21:13:16.095Z","steps":["trace[114758008] 'read index received' (duration: 276.157233ms)","trace[114758008] 'applied index is now lower than readState.Index' (duration: 5.934µs)"],"step_count":2}
{"level":"warn","ts":"2022-07-28T21:13:16.108Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"289.493656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
{"level":"info","ts":"2022-07-28T21:13:16.109Z","caller":"traceutil/trace.go:171","msg":"trace[2143506571] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:2382; }","duration":"290.055293ms","start":"2022-07-28T21:13:15.819Z","end":"2022-07-28T21:13:16.109Z","steps":["trace[2143506571] 'agreement among raft nodes before linearized reading' (duration: 276.912453ms)"],"step_count":1}
{"level":"warn","ts":"2022-07-28T21:13:16.109Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.804437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-07-28T21:13:16.110Z","caller":"traceutil/trace.go:171","msg":"trace[1415826103] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2382; }","duration":"122.155735ms","start":"2022-07-28T21:13:15.988Z","end":"2022-07-28T21:13:16.110Z","steps":["trace[1415826103] 'agreement among raft nodes before linearized reading' (duration: 107.807873ms)"],"step_count":1}
*
* ==> kernel <==
* 21:13:18 up 10 min, 0 users, load average: 0.32, 0.46, 0.27
Linux multinode-20220728204317-10421 5.10.57 #1 SMP Sat Jul 9 07:31:52 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [89e211959d81] <==
* I0728 21:03:44.482312 1 naming_controller.go:291] Starting NamingConditionController
I0728 21:03:44.482331 1 establishing_controller.go:76] Starting EstablishingController
I0728 21:03:44.482367 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0728 21:03:44.482393 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0728 21:03:44.482443 1 crd_finalizer.go:266] Starting CRDFinalizer
I0728 21:03:44.482844 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0728 21:03:44.489506 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
E0728 21:03:44.597589 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0728 21:03:44.616370 1 shared_informer.go:262] Caches are synced for node_authorizer
I0728 21:03:44.638320 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0728 21:03:44.651644 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0728 21:03:44.652806 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0728 21:03:44.659373 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0728 21:03:44.660291 1 cache.go:39] Caches are synced for autoregister controller
I0728 21:03:44.665583 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0728 21:03:44.684786 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0728 21:03:45.125620 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0728 21:03:45.459510 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0728 21:03:46.910867 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0728 21:03:47.082836 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0728 21:03:47.096467 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0728 21:03:47.168716 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0728 21:03:47.175263 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0728 21:03:48.989270 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0728 21:04:50.782378 1 controller.go:611] quota admission added evaluator for: endpoints
*
* ==> kube-apiserver [d410a7b73db1] <==
* W0728 21:03:03.276869 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.276890 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.276910 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.276930 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.276951 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.276970 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.276991 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.277012 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.277034 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.277055 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.277074 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.277094 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.277115 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.277135 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.277161 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.277184 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.277209 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.279997 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.280212 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.280406 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.280522 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.280645 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0728 21:03:03.280657 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0728 21:03:03.333340 1 object_count_tracker.go:84] "StorageObjectCountTracker pruner is exiting"
I0728 21:03:03.333439 1 controller.go:198] Shutting down kubernetes service endpoint reconciler
*
* ==> kube-controller-manager [a836c5cf561c] <==
* I0728 21:03:57.835870 1 event.go:294] "Event occurred" object="kube-system/etcd-multinode-20220728204317-10421" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 21:03:57.838253 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-jjnfs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 21:03:57.842862 1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 21:03:57.850349 1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
I0728 21:03:57.850393 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-bccrv" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-bccrv"
I0728 21:03:57.850402 1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d-x864v" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-6d4b75cb6d-x864v"
I0728 21:03:57.855436 1 event.go:294] "Event occurred" object="kube-system/kindnet-fp2hf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 21:03:57.867497 1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-multinode-20220728204317-10421" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 21:03:57.871852 1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-multinode-20220728204317-10421" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 21:03:58.230327 1 shared_informer.go:262] Caches are synced for garbage collector
I0728 21:03:58.267169 1 shared_informer.go:262] Caches are synced for garbage collector
I0728 21:03:58.267217 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W0728 21:04:05.235881 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728204317-10421-m02 node
I0728 21:04:37.792042 1 event.go:294] "Event occurred" object="multinode-20220728204317-10421-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220728204317-10421-m02 status is now: NodeNotReady"
I0728 21:04:37.806451 1 event.go:294] "Event occurred" object="kube-system/kindnet-t85pp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 21:04:37.817451 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-gmlr4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 21:04:37.827799 1 gc_controller.go:81] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-fzwzx"
I0728 21:04:37.846444 1 gc_controller.go:239] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-fzwzx"
I0728 21:04:37.846556 1 gc_controller.go:81] "PodGC is force deleting Pod" pod="kube-system/kindnet-4h7fj"
I0728 21:04:37.873666 1 gc_controller.go:239] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-4h7fj"
W0728 21:08:42.180156 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220728204317-10421-m02" does not exist
I0728 21:08:42.181795 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-srx7j" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-srx7j"
I0728 21:08:42.193294 1 range_allocator.go:374] Set node multinode-20220728204317-10421-m02 PodCIDR to [10.244.1.0/24]
W0728 21:09:02.381234 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728204317-10421-m02 node
I0728 21:09:02.869898 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-srx7j" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-srx7j"
*
* ==> kube-controller-manager [f9ccc91657ad] <==
* I0728 20:49:23.515358 1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
I0728 20:49:23.515773 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-bccrv" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-bccrv"
I0728 20:49:23.516014 1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d-x864v" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-6d4b75cb6d-x864v"
I0728 20:49:53.533523 1 event.go:294] "Event occurred" object="multinode-20220728204317-10421-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220728204317-10421-m02 status is now: NodeNotReady"
W0728 20:49:53.533708 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728204317-10421-m03 node
I0728 20:49:53.543836 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-gmlr4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 20:49:53.552893 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-srx7j" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 20:49:53.564672 1 event.go:294] "Event occurred" object="kube-system/kindnet-t85pp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 20:49:53.576117 1 event.go:294] "Event occurred" object="multinode-20220728204317-10421-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220728204317-10421-m03 status is now: NodeNotReady"
I0728 20:49:53.585680 1 event.go:294] "Event occurred" object="kube-system/kindnet-4h7fj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 20:49:53.597919 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-fzwzx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0728 20:53:56.756813 1 event.go:294] "Event occurred" object="default/busybox-d46db594c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-d46db594c-mp7l7"
W0728 20:54:00.616764 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220728204317-10421-m02" does not exist
I0728 20:54:00.622017 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-srx7j" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-srx7j"
I0728 20:54:00.628094 1 range_allocator.go:374] Set node multinode-20220728204317-10421-m02 PodCIDR to [10.244.1.0/24]
W0728 20:54:10.919833 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728204317-10421-m02 node
I0728 20:54:13.659578 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-srx7j" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-srx7j"
W0728 20:58:36.326709 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728204317-10421-m02 node
W0728 20:58:37.207052 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728204317-10421-m02 node
W0728 20:58:37.208174 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220728204317-10421-m03" does not exist
I0728 20:58:37.224756 1 range_allocator.go:374] Set node multinode-20220728204317-10421-m03 PodCIDR to [10.244.2.0/24]
W0728 20:58:57.370622 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728204317-10421-m03 node
I0728 20:58:58.715878 1 event.go:294] "Event occurred" object="default/busybox-d46db594c-mp7l7" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-d46db594c-mp7l7"
I0728 21:02:59.926590 1 event.go:294] "Event occurred" object="default/busybox-d46db594c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-d46db594c-5b74p"
W0728 21:03:01.949656 1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728204317-10421-m02 node
*
* ==> kube-proxy [ba8c9569e254] <==
* I0728 21:03:48.874070 1 node.go:163] Successfully retrieved node IP: 192.168.39.3
I0728 21:03:48.874390 1 server_others.go:138] "Detected node IP" address="192.168.39.3"
I0728 21:03:48.874648 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0728 21:03:48.938851 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0728 21:03:48.938896 1 server_others.go:206] "Using iptables Proxier"
I0728 21:03:48.939366 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0728 21:03:48.940701 1 server.go:661] "Version info" version="v1.24.3"
I0728 21:03:48.940807 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0728 21:03:48.944384 1 config.go:317] "Starting service config controller"
I0728 21:03:48.945292 1 config.go:444] "Starting node config controller"
I0728 21:03:48.945339 1 shared_informer.go:255] Waiting for caches to sync for node config
I0728 21:03:48.945367 1 config.go:226] "Starting endpoint slice config controller"
I0728 21:03:48.945371 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0728 21:03:48.945876 1 shared_informer.go:255] Waiting for caches to sync for service config
I0728 21:03:49.046247 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0728 21:03:49.046284 1 shared_informer.go:262] Caches are synced for node config
I0728 21:03:49.047435 1 shared_informer.go:262] Caches are synced for service config
*
* ==> kube-proxy [fa9f8dcb786b] <==
* I0728 20:49:03.018119 1 node.go:163] Successfully retrieved node IP: 192.168.39.3
I0728 20:49:03.018185 1 server_others.go:138] "Detected node IP" address="192.168.39.3"
I0728 20:49:03.018278 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0728 20:49:03.109021 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0728 20:49:03.109118 1 server_others.go:206] "Using iptables Proxier"
I0728 20:49:03.109220 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0728 20:49:03.110338 1 server.go:661] "Version info" version="v1.24.3"
I0728 20:49:03.110354 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0728 20:49:03.111948 1 config.go:317] "Starting service config controller"
I0728 20:49:03.111960 1 shared_informer.go:255] Waiting for caches to sync for service config
I0728 20:49:03.111975 1 config.go:226] "Starting endpoint slice config controller"
I0728 20:49:03.111981 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0728 20:49:03.114771 1 config.go:444] "Starting node config controller"
I0728 20:49:03.114781 1 shared_informer.go:255] Waiting for caches to sync for node config
I0728 20:49:03.212870 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0728 20:49:03.212929 1 shared_informer.go:262] Caches are synced for service config
I0728 20:49:03.214877 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [2bc6fa43faeb] <==
* I0728 20:48:57.685452 1 serving.go:348] Generated self-signed cert in-memory
W0728 20:49:00.377103 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0728 20:49:00.377772 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0728 20:49:00.378161 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0728 20:49:00.378467 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0728 20:49:00.436406 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
I0728 20:49:00.436451 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0728 20:49:00.439550 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0728 20:49:00.439681 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0728 20:49:00.448485 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0728 20:49:00.441848 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0728 20:49:00.549534 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0728 21:03:03.240496 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0728 21:03:03.240855 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0728 21:03:03.241784 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
*
* ==> kube-scheduler [86f2d7637e3d] <==
* I0728 21:03:41.360708 1 serving.go:348] Generated self-signed cert in-memory
W0728 21:03:44.528444 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0728 21:03:44.529308 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0728 21:03:44.529523 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0728 21:03:44.529659 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0728 21:03:44.613300 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
I0728 21:03:44.613344 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0728 21:03:44.618222 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0728 21:03:44.619026 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0728 21:03:44.619087 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0728 21:03:44.619176 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0728 21:03:44.720071 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Thu 2022-07-28 21:03:20 UTC, ends at Thu 2022-07-28 21:13:19 UTC. --
Jul 28 21:12:35 multinode-20220728204317-10421 kubelet[1215]: E0728 21:12:35.148934 1215 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86621407-cf23-4cf5-bc96-76ac4b8e9d5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-x864v_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-6d4b75cb6d-x864v" podUID=86621407-cf23-4cf5-bc96-76ac4b8e9d5e
Jul 28 21:12:49 multinode-20220728204317-10421 kubelet[1215]: E0728 21:12:49.147317 1215 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-d46db594c-bccrv_default\" network: could not retrieve port mappings: key is not found" podSandboxID="92ccbb14f82ea76a658a97181924fc0e88e724061cd218da7e8c630a194fb840"
Jul 28 21:12:49 multinode-20220728204317-10421 kubelet[1215]: E0728 21:12:49.147377 1215 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:92ccbb14f82ea76a658a97181924fc0e88e724061cd218da7e8c630a194fb840}
Jul 28 21:12:49 multinode-20220728204317-10421 kubelet[1215]: E0728 21:12:49.147405 1215 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8906b7f-acaa-40b1-bb17-0ec45ee13fe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-bccrv_default\\\" network: could not retrieve port mappings: key is not found\""
Jul 28 21:12:49 multinode-20220728204317-10421 kubelet[1215]: E0728 21:12:49.147428 1215 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8906b7f-acaa-40b1-bb17-0ec45ee13fe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-bccrv_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-d46db594c-bccrv" podUID=f8906b7f-acaa-40b1-bb17-0ec45ee13fe7
Jul 28 21:12:50 multinode-20220728204317-10421 kubelet[1215]: E0728 21:12:50.148441 1215 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-6d4b75cb6d-x864v_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="761ea68543558ca336219fc051f5118b74c7fb2b3ff847e64bc37fe3b45671b2"
Jul 28 21:12:50 multinode-20220728204317-10421 kubelet[1215]: E0728 21:12:50.148650 1215 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:761ea68543558ca336219fc051f5118b74c7fb2b3ff847e64bc37fe3b45671b2}
Jul 28 21:12:50 multinode-20220728204317-10421 kubelet[1215]: E0728 21:12:50.148836 1215 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86621407-cf23-4cf5-bc96-76ac4b8e9d5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-x864v_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Jul 28 21:12:50 multinode-20220728204317-10421 kubelet[1215]: E0728 21:12:50.148863 1215 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86621407-cf23-4cf5-bc96-76ac4b8e9d5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-x864v_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-6d4b75cb6d-x864v" podUID=86621407-cf23-4cf5-bc96-76ac4b8e9d5e
Jul 28 21:13:00 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:00.148104 1215 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-d46db594c-bccrv_default\" network: could not retrieve port mappings: key is not found" podSandboxID="92ccbb14f82ea76a658a97181924fc0e88e724061cd218da7e8c630a194fb840"
Jul 28 21:13:00 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:00.148461 1215 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:92ccbb14f82ea76a658a97181924fc0e88e724061cd218da7e8c630a194fb840}
Jul 28 21:13:00 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:00.148526 1215 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8906b7f-acaa-40b1-bb17-0ec45ee13fe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-bccrv_default\\\" network: could not retrieve port mappings: key is not found\""
Jul 28 21:13:00 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:00.148588 1215 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8906b7f-acaa-40b1-bb17-0ec45ee13fe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-bccrv_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-d46db594c-bccrv" podUID=f8906b7f-acaa-40b1-bb17-0ec45ee13fe7
Jul 28 21:13:03 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:03.150047 1215 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-6d4b75cb6d-x864v_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="761ea68543558ca336219fc051f5118b74c7fb2b3ff847e64bc37fe3b45671b2"
Jul 28 21:13:03 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:03.150111 1215 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:761ea68543558ca336219fc051f5118b74c7fb2b3ff847e64bc37fe3b45671b2}
Jul 28 21:13:03 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:03.150139 1215 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86621407-cf23-4cf5-bc96-76ac4b8e9d5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-x864v_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Jul 28 21:13:03 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:03.150160 1215 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86621407-cf23-4cf5-bc96-76ac4b8e9d5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-x864v_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-6d4b75cb6d-x864v" podUID=86621407-cf23-4cf5-bc96-76ac4b8e9d5e
Jul 28 21:13:14 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:14.150458 1215 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-6d4b75cb6d-x864v_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="761ea68543558ca336219fc051f5118b74c7fb2b3ff847e64bc37fe3b45671b2"
Jul 28 21:13:14 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:14.150549 1215 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:761ea68543558ca336219fc051f5118b74c7fb2b3ff847e64bc37fe3b45671b2}
Jul 28 21:13:14 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:14.150598 1215 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86621407-cf23-4cf5-bc96-76ac4b8e9d5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-x864v_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Jul 28 21:13:14 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:14.150632 1215 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86621407-cf23-4cf5-bc96-76ac4b8e9d5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-x864v_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-6d4b75cb6d-x864v" podUID=86621407-cf23-4cf5-bc96-76ac4b8e9d5e
Jul 28 21:13:15 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:15.150182 1215 remote_runtime.go:248] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-d46db594c-bccrv_default\" network: could not retrieve port mappings: key is not found" podSandboxID="92ccbb14f82ea76a658a97181924fc0e88e724061cd218da7e8c630a194fb840"
Jul 28 21:13:15 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:15.150293 1215 kuberuntime_manager.go:999] "Failed to stop sandbox" podSandboxID={Type:docker ID:92ccbb14f82ea76a658a97181924fc0e88e724061cd218da7e8c630a194fb840}
Jul 28 21:13:15 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:15.150350 1215 kuberuntime_manager.go:738] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8906b7f-acaa-40b1-bb17-0ec45ee13fe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-bccrv_default\\\" network: could not retrieve port mappings: key is not found\""
Jul 28 21:13:15 multinode-20220728204317-10421 kubelet[1215]: E0728 21:13:15.150401 1215 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8906b7f-acaa-40b1-bb17-0ec45ee13fe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-d46db594c-bccrv_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-d46db594c-bccrv" podUID=f8906b7f-acaa-40b1-bb17-0ec45ee13fe7
*
* ==> storage-provisioner [54e86224ae8d] <==
* I0728 21:04:33.315775 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0728 21:04:33.339830 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0728 21:04:33.340605 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0728 21:04:50.785943 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0728 21:04:50.787094 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220728204317-10421_d2102dff-6968-43ac-ae1b-c42bd8bbafb1!
I0728 21:04:50.789105 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"007ddb14-41e3-4fad-a2cc-99a2aac62e6d", APIVersion:"v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220728204317-10421_d2102dff-6968-43ac-ae1b-c42bd8bbafb1 became leader
I0728 21:04:50.887579 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220728204317-10421_d2102dff-6968-43ac-ae1b-c42bd8bbafb1!
*
* ==> storage-provisioner [c38b38cb6c4b] <==
* I0728 21:03:48.847059 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0728 21:04:18.852928 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
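The kubelet section of the logs above is dominated by the same two sandboxes repeatedly failing to tear down ("networkPlugin cni failed to teardown pod ... could not retrieve port mappings: key is not found") for coredns-6d4b75cb6d-x864v and busybox-d46db594c-bccrv. A minimal Go helper like the sketch below can be pointed at a saved copy of such a post-mortem log to count how often each pod hits that error; the file name postmortem.log and the grouping keys are assumptions for illustration, not part of the minikube test suite.

// grep_kubelet_errors.go — illustrative only: count the recurring CNI
// teardown failures in a saved post-mortem log.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("postmortem.log") // assumed path to the captured log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines are long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "could not retrieve port mappings") {
			continue
		}
		// Group by pod name so the repetition pattern is visible at a glance.
		switch {
		case strings.Contains(line, "coredns"):
			counts["coredns"]++
		case strings.Contains(line, "busybox"):
			counts["busybox"]++
		default:
			counts["other"]++
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	for pod, n := range counts {
		fmt.Printf("%-8s %d occurrences\n", pod, n)
	}
}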
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20220728204317-10421 -n multinode-20220728204317-10421
helpers_test.go:261: (dbg) Run: kubectl --context multinode-20220728204317-10421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-d46db594c-5b74p
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context multinode-20220728204317-10421 describe pod busybox-d46db594c-5b74p
helpers_test.go:280: (dbg) kubectl --context multinode-20220728204317-10421 describe pod busybox-d46db594c-5b74p:
-- stdout --
Name: busybox-d46db594c-5b74p
Namespace: default
Priority: 0
Node: <none>
Labels: app=busybox
pod-template-hash=d46db594c
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/busybox-d46db594c
Containers:
busybox:
Image: gcr.io/k8s-minikube/busybox:1.28
Port: <none>
Host Port: <none>
Command:
sleep
3600
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c8r24 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-c8r24:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10m default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
Warning FailedScheduling 10m default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
Warning FailedScheduling 8m39s (x2 over 8m42s) default-scheduler 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
Warning FailedScheduling 4m17s (x2 over 9m35s) default-scheduler 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
-- /stdout --
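Every FailedScheduling event above reports that the remaining nodes "didn't match pod anti-affinity rules": busybox-d46db594c-5b74p stays Pending because each schedulable node already runs a pod of the same ReplicaSet. The sketch below shows, under assumptions, the kind of required pod anti-affinity term that produces exactly this message; the real busybox manifest used by the test may differ in labels or topology key.

// antiaffinity_sketch.go — a sketch of a required pod anti-affinity term of
// the shape implied by the scheduling failures above.
package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	affinity := corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			// Require that no two pods labeled app=busybox share a node, so a
			// cluster with two schedulable nodes can fit at most two replicas.
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	out, err := json.MarshalIndent(affinity, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}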
helpers_test.go:283: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (15.10s)