=== RUN TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-190825
multinode_test.go:450: (dbg) Run: out/minikube-linux-amd64 start -p multinode-190825-m02 --driver=kvm2
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-190825-m02 --driver=kvm2 : exit status 14 (93.456975ms)
-- stdout --
* [multinode-190825-m02] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15232
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15232-712341/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-712341/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr **
! Profile name 'multinode-190825-m02' is duplicated with machine name 'multinode-190825-m02' in profile 'multinode-190825'
X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
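The MK_USAGE exit above is minikube's profile-name validation at work: a new profile may not reuse a machine name that already belongs to an existing profile, and "multinode-190825-m02" is already the name of the second node of profile "multinode-190825". A minimal Go sketch of that kind of check, with hypothetical Profile/Node types rather than minikube's real config structs:

package main

import "fmt"

// Node and Profile are simplified stand-ins for minikube's config types
// (hypothetical; the real structs live elsewhere in the codebase).
type Node struct{ Name string }
type Profile struct {
	Name  string
	Nodes []Node
}

// validateProfileName rejects a new profile name that collides with an
// existing profile, or with a machine name derived from one, such as
// "multinode-190825-m02" inside profile "multinode-190825".
func validateProfileName(name string, existing []Profile) error {
	for _, p := range existing {
		if p.Name == name {
			return fmt.Errorf("profile name %q already exists", name)
		}
		// Secondary nodes get the -m02, -m03, ... machine-name suffix.
		for i := 1; i < len(p.Nodes); i++ {
			machine := fmt.Sprintf("%s-m%02d", p.Name, i+1)
			if machine == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, machine, p.Name)
			}
		}
	}
	return nil
}

func main() {
	existing := []Profile{{Name: "multinode-190825", Nodes: []Node{{"m01"}, {"m02"}}}}
	fmt.Println(validateProfileName("multinode-190825-m02", existing))
}

The -m03 attempt that follows passes this check (the cluster's m03 node was removed by the earlier "node delete m03", visible in the audit table below), so it proceeds to create a VM.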
multinode_test.go:458: (dbg) Run: out/minikube-linux-amd64 start -p multinode-190825-m03 --driver=kvm2
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-190825-m03 --driver=kvm2 : signal: killed (27.040652485s)
-- stdout --
* [multinode-190825-m03] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15232
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15232-712341/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-712341/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the kvm2 driver based on user configuration
* Starting control plane node multinode-190825-m03 in cluster multinode-190825-m03
* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
-- /stdout --
multinode_test.go:460: failed to start profile. args "out/minikube-linux-amd64 start -p multinode-190825-m03 --driver=kvm2 " : signal: killed
multinode_test.go:465: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-190825
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-190825: context deadline exceeded (1.689µs)
multinode_test.go:470: (dbg) Run: out/minikube-linux-amd64 delete -p multinode-190825-m03
multinode_test.go:470: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p multinode-190825-m03: context deadline exceeded (211ns)
multinode_test.go:472: failed to clean temporary profile. args "out/minikube-linux-amd64 delete -p multinode-190825-m03" : context deadline exceeded
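The microsecond-scale "context deadline exceeded" failures are a consequence of the test's shared context: the killed 27s start consumed whatever deadline remained, so the subsequent exec.CommandContext invocations fail before a process is even spawned. A small reproduction of that behavior:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// A context whose deadline has already passed: CommandContext's Run
	// returns the context error immediately, which is why the node add
	// and delete calls above fail in microseconds or nanoseconds.
	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()
	time.Sleep(time.Millisecond) // make sure the deadline is gone

	cmd := exec.CommandContext(ctx, "sleep", "5")
	start := time.Now()
	err := cmd.Run()
	fmt.Printf("err=%v after %s (ctx err: %v)\n", err, time.Since(start), ctx.Err())
}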
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-190825 -n multinode-190825
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-190825 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-190825 logs -n 25: (1.551415239s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| cp | multinode-190825 cp multinode-190825-m02:/home/docker/cp-test.txt | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | multinode-190825-m03:/home/docker/cp-test_multinode-190825-m02_multinode-190825-m03.txt | | | | | |
| ssh | multinode-190825 ssh -n | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | multinode-190825-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-190825 ssh -n multinode-190825-m03 sudo cat | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | /home/docker/cp-test_multinode-190825-m02_multinode-190825-m03.txt | | | | | |
| cp | multinode-190825 cp testdata/cp-test.txt | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | multinode-190825-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-190825 ssh -n | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | multinode-190825-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-190825 cp multinode-190825-m03:/home/docker/cp-test.txt | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | /tmp/TestMultiNodeserialCopyFile159420044/001/cp-test_multinode-190825-m03.txt | | | | | |
| ssh | multinode-190825 ssh -n | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | multinode-190825-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-190825 cp multinode-190825-m03:/home/docker/cp-test.txt | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | multinode-190825:/home/docker/cp-test_multinode-190825-m03_multinode-190825.txt | | | | | |
| ssh | multinode-190825 ssh -n | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | multinode-190825-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-190825 ssh -n multinode-190825 sudo cat | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | /home/docker/cp-test_multinode-190825-m03_multinode-190825.txt | | | | | |
| cp | multinode-190825 cp multinode-190825-m03:/home/docker/cp-test.txt | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | multinode-190825-m02:/home/docker/cp-test_multinode-190825-m03_multinode-190825-m02.txt | | | | | |
| ssh | multinode-190825 ssh -n | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | multinode-190825-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-190825 ssh -n multinode-190825-m02 sudo cat | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | /home/docker/cp-test_multinode-190825-m03_multinode-190825-m02.txt | | | | | |
| node | multinode-190825 node stop m03 | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| node | multinode-190825 node start | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:12 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-190825 | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | |
| stop | -p multinode-190825 | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:12 UTC | 31 Oct 22 19:13 UTC |
| start | -p multinode-190825 | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:13 UTC | 31 Oct 22 19:27 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-190825 | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:27 UTC | |
| node | multinode-190825 node delete | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:27 UTC | 31 Oct 22 19:27 UTC |
| | m03 | | | | | |
| stop | multinode-190825 stop | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:27 UTC | 31 Oct 22 19:28 UTC |
| start | -p multinode-190825 | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:28 UTC | 31 Oct 22 19:37 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| node | list -p multinode-190825 | multinode-190825 | jenkins | v1.27.1 | 31 Oct 22 19:37 UTC | |
| start | -p multinode-190825-m02 | multinode-190825-m02 | jenkins | v1.27.1 | 31 Oct 22 19:37 UTC | |
| | --driver=kvm2 | | | | | |
| start | -p multinode-190825-m03 | multinode-190825-m03 | jenkins | v1.27.1 | 31 Oct 22 19:37 UTC | |
| | --driver=kvm2 | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/10/31 19:37:58
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1031 19:37:58.628016 745229 out.go:296] Setting OutFile to fd 1 ...
I1031 19:37:58.628146 745229 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 19:37:58.628149 745229 out.go:309] Setting ErrFile to fd 2...
I1031 19:37:58.628153 745229 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 19:37:58.628280 745229 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-712341/.minikube/bin
I1031 19:37:58.628840 745229 out.go:303] Setting JSON to false
I1031 19:37:58.629729 745229 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12032,"bootTime":1667233047,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1031 19:37:58.629791 745229 start.go:126] virtualization: kvm guest
I1031 19:37:58.632377 745229 out.go:177] * [multinode-190825-m03] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1031 19:37:58.634561 745229 out.go:177] - MINIKUBE_LOCATION=15232
I1031 19:37:58.634534 745229 notify.go:220] Checking for updates...
I1031 19:37:58.636075 745229 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1031 19:37:58.637860 745229 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15232-712341/kubeconfig
I1031 19:37:58.639470 745229 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-712341/.minikube
I1031 19:37:58.641125 745229 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1031 19:37:58.643081 745229 config.go:180] Loaded profile config "multinode-190825": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1031 19:37:58.643157 745229 driver.go:365] Setting default libvirt URI to qemu:///system
I1031 19:37:58.679882 745229 out.go:177] * Using the kvm2 driver based on user configuration
I1031 19:37:58.681402 745229 start.go:282] selected driver: kvm2
I1031 19:37:58.681418 745229 start.go:808] validating driver "kvm2" against <nil>
I1031 19:37:58.681438 745229 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1031 19:37:58.681915 745229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1031 19:37:58.682133 745229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15232-712341/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1031 19:37:58.699063 745229 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.27.1
I1031 19:37:58.699138 745229 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I1031 19:37:58.699606 745229 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
I1031 19:37:58.699716 745229 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
I1031 19:37:58.699743 745229 cni.go:95] Creating CNI manager for ""
I1031 19:37:58.699752 745229 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1031 19:37:58.699762 745229 start_flags.go:317] config:
{Name:multinode-190825-m03 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-190825-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 19:37:58.699847 745229 iso.go:124] acquiring lock: {Name:mk011a8ff7c04fa26a90a5ce0e723eda7b65e079 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1031 19:37:58.701906 745229 out.go:177] * Starting control plane node multinode-190825-m03 in cluster multinode-190825-m03
I1031 19:37:58.703265 745229 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1031 19:37:58.703295 745229 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15232-712341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I1031 19:37:58.703307 745229 cache.go:57] Caching tarball of preloaded images
I1031 19:37:58.703424 745229 preload.go:174] Found /home/jenkins/minikube-integration/15232-712341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1031 19:37:58.703438 745229 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I1031 19:37:58.703526 745229 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-712341/.minikube/profiles/multinode-190825-m03/config.json ...
I1031 19:37:58.703539 745229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-712341/.minikube/profiles/multinode-190825-m03/config.json: {Name:mk38a54fa1924dc1355e24cbdb92f0478f54b605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 19:37:58.703670 745229 cache.go:208] Successfully downloaded all kic artifacts
I1031 19:37:58.703686 745229 start.go:364] acquiring machines lock for multinode-190825-m03: {Name:mk36e2f6a86e4081a558c406962a31c4fa75c732 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1031 19:37:58.703720 745229 start.go:368] acquired machines lock for "multinode-190825-m03" in 26.173µs
I1031 19:37:58.703729 745229 start.go:93] Provisioning new machine with config: &{Name:multinode-190825-m03 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15232/minikube-v1.27.0-1666976405-15232-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-190825-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1031 19:37:58.703785 745229 start.go:125] createHost starting for "" (driver="kvm2")
I1031 19:37:58.706256 745229 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I1031 19:37:58.706401 745229 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 19:37:58.706434 745229 main.go:134] libmachine: Launching plugin server for driver kvm2
I1031 19:37:58.721578 745229 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45831
I1031 19:37:58.722037 745229 main.go:134] libmachine: () Calling .GetVersion
I1031 19:37:58.722569 745229 main.go:134] libmachine: Using API Version 1
I1031 19:37:58.722585 745229 main.go:134] libmachine: () Calling .SetConfigRaw
I1031 19:37:58.722927 745229 main.go:134] libmachine: () Calling .GetMachineName
I1031 19:37:58.723085 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetMachineName
I1031 19:37:58.723215 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .DriverName
I1031 19:37:58.723350 745229 start.go:159] libmachine.API.Create for "multinode-190825-m03" (driver="kvm2")
I1031 19:37:58.723370 745229 client.go:168] LocalClient.Create starting
I1031 19:37:58.723395 745229 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-712341/.minikube/certs/ca.pem
I1031 19:37:58.723419 745229 main.go:134] libmachine: Decoding PEM data...
I1031 19:37:58.723432 745229 main.go:134] libmachine: Parsing certificate...
I1031 19:37:58.723495 745229 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-712341/.minikube/certs/cert.pem
I1031 19:37:58.723509 745229 main.go:134] libmachine: Decoding PEM data...
I1031 19:37:58.723519 745229 main.go:134] libmachine: Parsing certificate...
I1031 19:37:58.723532 745229 main.go:134] libmachine: Running pre-create checks...
I1031 19:37:58.723539 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .PreCreateCheck
I1031 19:37:58.723855 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetConfigRaw
I1031 19:37:58.724235 745229 main.go:134] libmachine: Creating machine...
I1031 19:37:58.724242 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .Create
I1031 19:37:58.724474 745229 main.go:134] libmachine: (multinode-190825-m03) Creating KVM machine...
I1031 19:37:58.725809 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found existing default KVM network
I1031 19:37:58.726747 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:37:58.726619 745252 network.go:246] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:2f:6a}}
I1031 19:37:58.727442 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:37:58.727326 745252 network.go:295] reserving subnet 192.168.50.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.50.0:0xc0002145d8] misses:0}
I1031 19:37:58.727465 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:37:58.727381 745252 network.go:241] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1031 19:37:58.733539 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | trying to create private KVM network mk-multinode-190825-m03 192.168.50.0/24...
I1031 19:37:58.809120 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | private KVM network mk-multinode-190825-m03 192.168.50.0/24 created
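The subnet lines above show the driver scanning candidate private /24 networks and taking the first free one: 192.168.39.0/24 is occupied by the existing multinode-190825 network (virbr1), so 192.168.50.0/24 is reserved for the new cluster. A rough sketch of such a scan follows; the step size between candidates is an assumption, inferred only from the 39 -> 50 jump in this log:

package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any local interface already sits inside the
// candidate CIDR, mirroring the "skipping subnet ... that is taken" check.
func subnetTaken(cidr string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Candidate /24s stepped by 11 in the third octet, matching the log's
	// 192.168.39.0 -> 192.168.50.0 progression (an assumption here).
	for third := 39; third < 255; third += 11 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		taken, err := subnetTaken(cidr)
		if err != nil {
			panic(err)
		}
		if !taken {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}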
I1031 19:37:58.809146 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:37:58.809101 745252 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/15232-712341/.minikube
I1031 19:37:58.809177 745229 main.go:134] libmachine: (multinode-190825-m03) Setting up store path in /home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03 ...
I1031 19:37:58.809205 745229 main.go:134] libmachine: (multinode-190825-m03) Building disk image from file:///home/jenkins/minikube-integration/15232-712341/.minikube/cache/iso/amd64/minikube-v1.27.0-1666976405-15232-amd64.iso
I1031 19:37:58.809292 745229 main.go:134] libmachine: (multinode-190825-m03) Downloading /home/jenkins/minikube-integration/15232-712341/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/15232-712341/.minikube/cache/iso/amd64/minikube-v1.27.0-1666976405-15232-amd64.iso...
I1031 19:37:59.034733 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:37:59.034591 745252 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03/id_rsa...
I1031 19:37:59.161258 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:37:59.161120 745252 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03/multinode-190825-m03.rawdisk...
I1031 19:37:59.161286 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Writing magic tar header
I1031 19:37:59.161303 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Writing SSH key tar header
I1031 19:37:59.161316 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:37:59.161229 745252 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03 ...
I1031 19:37:59.161363 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03
I1031 19:37:59.161384 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15232-712341/.minikube/machines
I1031 19:37:59.161395 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15232-712341/.minikube
I1031 19:37:59.161408 745229 main.go:134] libmachine: (multinode-190825-m03) Setting executable bit set on /home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03 (perms=drwx------)
I1031 19:37:59.161421 745229 main.go:134] libmachine: (multinode-190825-m03) Setting executable bit set on /home/jenkins/minikube-integration/15232-712341/.minikube/machines (perms=drwxrwxr-x)
I1031 19:37:59.161428 745229 main.go:134] libmachine: (multinode-190825-m03) Setting executable bit set on /home/jenkins/minikube-integration/15232-712341/.minikube (perms=drwxr-xr-x)
I1031 19:37:59.161437 745229 main.go:134] libmachine: (multinode-190825-m03) Setting executable bit set on /home/jenkins/minikube-integration/15232-712341 (perms=drwxrwxr-x)
I1031 19:37:59.161446 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15232-712341
I1031 19:37:59.161456 745229 main.go:134] libmachine: (multinode-190825-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1031 19:37:59.161481 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I1031 19:37:59.161493 745229 main.go:134] libmachine: (multinode-190825-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1031 19:37:59.161502 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Checking permissions on dir: /home/jenkins
I1031 19:37:59.161510 745229 main.go:134] libmachine: (multinode-190825-m03) Creating domain...
I1031 19:37:59.161526 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Checking permissions on dir: /home
I1031 19:37:59.161531 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Skipping /home - not owner
I1031 19:37:59.162623 745229 main.go:134] libmachine: (multinode-190825-m03) define libvirt domain using xml:
I1031 19:37:59.162651 745229 main.go:134] libmachine: (multinode-190825-m03) <domain type='kvm'>
I1031 19:37:59.162662 745229 main.go:134] libmachine: (multinode-190825-m03) <name>multinode-190825-m03</name>
I1031 19:37:59.162676 745229 main.go:134] libmachine: (multinode-190825-m03) <memory unit='MiB'>6000</memory>
I1031 19:37:59.162685 745229 main.go:134] libmachine: (multinode-190825-m03) <vcpu>2</vcpu>
I1031 19:37:59.162692 745229 main.go:134] libmachine: (multinode-190825-m03) <features>
I1031 19:37:59.162701 745229 main.go:134] libmachine: (multinode-190825-m03) <acpi/>
I1031 19:37:59.162708 745229 main.go:134] libmachine: (multinode-190825-m03) <apic/>
I1031 19:37:59.162713 745229 main.go:134] libmachine: (multinode-190825-m03) <pae/>
I1031 19:37:59.162718 745229 main.go:134] libmachine: (multinode-190825-m03)
I1031 19:37:59.162723 745229 main.go:134] libmachine: (multinode-190825-m03) </features>
I1031 19:37:59.162727 745229 main.go:134] libmachine: (multinode-190825-m03) <cpu mode='host-passthrough'>
I1031 19:37:59.162732 745229 main.go:134] libmachine: (multinode-190825-m03)
I1031 19:37:59.162751 745229 main.go:134] libmachine: (multinode-190825-m03) </cpu>
I1031 19:37:59.162758 745229 main.go:134] libmachine: (multinode-190825-m03) <os>
I1031 19:37:59.162766 745229 main.go:134] libmachine: (multinode-190825-m03) <type>hvm</type>
I1031 19:37:59.162780 745229 main.go:134] libmachine: (multinode-190825-m03) <boot dev='cdrom'/>
I1031 19:37:59.162789 745229 main.go:134] libmachine: (multinode-190825-m03) <boot dev='hd'/>
I1031 19:37:59.162796 745229 main.go:134] libmachine: (multinode-190825-m03) <bootmenu enable='no'/>
I1031 19:37:59.162806 745229 main.go:134] libmachine: (multinode-190825-m03) </os>
I1031 19:37:59.162813 745229 main.go:134] libmachine: (multinode-190825-m03) <devices>
I1031 19:37:59.162824 745229 main.go:134] libmachine: (multinode-190825-m03) <disk type='file' device='cdrom'>
I1031 19:37:59.162836 745229 main.go:134] libmachine: (multinode-190825-m03) <source file='/home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03/boot2docker.iso'/>
I1031 19:37:59.162845 745229 main.go:134] libmachine: (multinode-190825-m03) <target dev='hdc' bus='scsi'/>
I1031 19:37:59.162859 745229 main.go:134] libmachine: (multinode-190825-m03) <readonly/>
I1031 19:37:59.162870 745229 main.go:134] libmachine: (multinode-190825-m03) </disk>
I1031 19:37:59.162879 745229 main.go:134] libmachine: (multinode-190825-m03) <disk type='file' device='disk'>
I1031 19:37:59.162890 745229 main.go:134] libmachine: (multinode-190825-m03) <driver name='qemu' type='raw' cache='default' io='threads' />
I1031 19:37:59.162903 745229 main.go:134] libmachine: (multinode-190825-m03) <source file='/home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03/multinode-190825-m03.rawdisk'/>
I1031 19:37:59.162912 745229 main.go:134] libmachine: (multinode-190825-m03) <target dev='hda' bus='virtio'/>
I1031 19:37:59.162919 745229 main.go:134] libmachine: (multinode-190825-m03) </disk>
I1031 19:37:59.162938 745229 main.go:134] libmachine: (multinode-190825-m03) <interface type='network'>
I1031 19:37:59.162950 745229 main.go:134] libmachine: (multinode-190825-m03) <source network='mk-multinode-190825-m03'/>
I1031 19:37:59.162955 745229 main.go:134] libmachine: (multinode-190825-m03) <model type='virtio'/>
I1031 19:37:59.162959 745229 main.go:134] libmachine: (multinode-190825-m03) </interface>
I1031 19:37:59.162965 745229 main.go:134] libmachine: (multinode-190825-m03) <interface type='network'>
I1031 19:37:59.162969 745229 main.go:134] libmachine: (multinode-190825-m03) <source network='default'/>
I1031 19:37:59.162974 745229 main.go:134] libmachine: (multinode-190825-m03) <model type='virtio'/>
I1031 19:37:59.162978 745229 main.go:134] libmachine: (multinode-190825-m03) </interface>
I1031 19:37:59.162990 745229 main.go:134] libmachine: (multinode-190825-m03) <serial type='pty'>
I1031 19:37:59.162994 745229 main.go:134] libmachine: (multinode-190825-m03) <target port='0'/>
I1031 19:37:59.163001 745229 main.go:134] libmachine: (multinode-190825-m03) </serial>
I1031 19:37:59.163006 745229 main.go:134] libmachine: (multinode-190825-m03) <console type='pty'>
I1031 19:37:59.163011 745229 main.go:134] libmachine: (multinode-190825-m03) <target type='serial' port='0'/>
I1031 19:37:59.163019 745229 main.go:134] libmachine: (multinode-190825-m03) </console>
I1031 19:37:59.163025 745229 main.go:134] libmachine: (multinode-190825-m03) <rng model='virtio'>
I1031 19:37:59.163030 745229 main.go:134] libmachine: (multinode-190825-m03) <backend model='random'>/dev/random</backend>
I1031 19:37:59.163035 745229 main.go:134] libmachine: (multinode-190825-m03) </rng>
I1031 19:37:59.163039 745229 main.go:134] libmachine: (multinode-190825-m03)
I1031 19:37:59.163043 745229 main.go:134] libmachine: (multinode-190825-m03)
I1031 19:37:59.163047 745229 main.go:134] libmachine: (multinode-190825-m03) </devices>
I1031 19:37:59.163052 745229 main.go:134] libmachine: (multinode-190825-m03) </domain>
I1031 19:37:59.163056 745229 main.go:134] libmachine: (multinode-190825-m03)
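The block above is the complete libvirt domain XML the kvm2 driver generates: boot from the boot2docker ISO on a SCSI CD-ROM, a raw-format virtio system disk, two virtio NICs (the private mk-multinode-190825-m03 network plus libvirt's default network), a serial console, and a virtio RNG fed from /dev/random. Defining and booting such a domain from Go looks roughly like this, using the libvirt.org/go/libvirt bindings (a sketch, not the driver's actual code):

package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Connect to the system libvirt daemon, as qemu:///system in the log.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	xml, err := os.ReadFile("domain.xml") // the XML printed above
	if err != nil {
		panic(err)
	}

	// Define (persist) the domain, then boot it; this corresponds to the
	// "define libvirt domain using xml" and "Creating domain..." steps.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	name, _ := dom.GetName()
	fmt.Println("domain", name, "started")
}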
I1031 19:37:59.167766 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:03:6c:12 in network default
I1031 19:37:59.168420 745229 main.go:134] libmachine: (multinode-190825-m03) Ensuring networks are active...
I1031 19:37:59.168440 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:37:59.169206 745229 main.go:134] libmachine: (multinode-190825-m03) Ensuring network default is active
I1031 19:37:59.169649 745229 main.go:134] libmachine: (multinode-190825-m03) Ensuring network mk-multinode-190825-m03 is active
I1031 19:37:59.170387 745229 main.go:134] libmachine: (multinode-190825-m03) Getting domain xml...
I1031 19:37:59.171146 745229 main.go:134] libmachine: (multinode-190825-m03) Creating domain...
I1031 19:38:00.444321 745229 main.go:134] libmachine: (multinode-190825-m03) Waiting to get IP...
I1031 19:38:00.445134 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:00.445529 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:00.445567 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:00.445503 745252 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
I1031 19:38:00.710087 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:00.710610 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:00.710637 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:00.710542 745252 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
I1031 19:38:01.093103 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:01.093529 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:01.093556 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:01.093474 745252 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
I1031 19:38:01.518024 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:01.518538 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:01.518569 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:01.518497 745252 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
I1031 19:38:01.993077 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:01.993631 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:01.993657 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:01.993564 745252 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
I1031 19:38:02.582399 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:02.582842 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:02.582863 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:02.582754 745252 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
I1031 19:38:03.418676 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:03.419107 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:03.419125 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:03.419066 745252 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
I1031 19:38:04.167682 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:04.168075 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:04.168099 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:04.168019 745252 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
I1031 19:38:05.157247 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:05.157716 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:05.157743 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:05.157661 745252 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
I1031 19:38:06.349344 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:06.349756 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:06.349775 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:06.349702 745252 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
I1031 19:38:08.029538 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:08.030064 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:08.030092 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:08.030006 745252 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
I1031 19:38:10.378618 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:10.379021 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:10.379068 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:10.378976 745252 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
I1031 19:38:13.747427 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:13.747846 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:13.747865 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:13.747789 745252 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
I1031 19:38:16.870308 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:16.870686 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find current IP address of domain multinode-190825-m03 in network mk-multinode-190825-m03
I1031 19:38:16.870707 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | I1031 19:38:16.870644 745252 retry.go:31] will retry after 4.276119362s: waiting for machine to come up
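The wait-for-IP loop retries with growing, jittered delays (263ms on the first attempt, up to ~4.3s by the fourteenth) until the guest's DHCP lease appears. A minimal sketch of that retry shape, with a hypothetical probe function standing in for the libvirt lease lookup:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff polls probe() with a jittered, exponentially growing
// delay, like the "will retry after ..." lines while waiting for an IP.
// The 1.4x growth factor is an assumption fitted to the intervals above.
func retryWithBackoff(probe func() (string, bool), maxWait time.Duration) (string, error) {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if ip, ok := probe(); ok {
			return ip, nil
		}
		// Jitter in [0.5, 1.5) of the base delay, then grow the base.
		sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = time.Duration(float64(delay) * 1.4)
	}
	return "", fmt.Errorf("timed out after %s", maxWait)
}

func main() {
	attempts := 0
	ip, err := retryWithBackoff(func() (string, bool) {
		attempts++
		if attempts >= 5 { // pretend DHCP answers on the fifth poll
			return "192.168.50.254", true
		}
		return "", false
	}, 30*time.Second)
	fmt.Println(ip, err)
}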
I1031 19:38:21.149428 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.149980 745229 main.go:134] libmachine: (multinode-190825-m03) Found IP for machine: 192.168.50.254
I1031 19:38:21.150008 745229 main.go:134] libmachine: (multinode-190825-m03) Reserving static IP address...
I1031 19:38:21.150018 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has current primary IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.150418 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | unable to find host DHCP lease matching {name: "multinode-190825-m03", mac: "52:54:00:e7:99:19", ip: "192.168.50.254"} in network mk-multinode-190825-m03
I1031 19:38:21.229181 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Getting to WaitForSSH function...
I1031 19:38:21.229209 745229 main.go:134] libmachine: (multinode-190825-m03) Reserved static IP address: 192.168.50.254
I1031 19:38:21.229217 745229 main.go:134] libmachine: (multinode-190825-m03) Waiting for SSH to be available...
I1031 19:38:21.232422 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.232834 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:99:19}
I1031 19:38:21.232865 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.232979 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Using SSH client type: external
I1031 19:38:21.233007 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03/id_rsa (-rw-------)
I1031 19:38:21.233050 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I1031 19:38:21.233067 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | About to run SSH command:
I1031 19:38:21.233078 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | exit 0
I1031 19:38:21.321416 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | SSH cmd err, output: <nil>:
I1031 19:38:21.321721 745229 main.go:134] libmachine: (multinode-190825-m03) KVM machine creation complete!
I1031 19:38:21.322075 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetConfigRaw
I1031 19:38:21.322623 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .DriverName
I1031 19:38:21.322825 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .DriverName
I1031 19:38:21.322980 745229 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes...
I1031 19:38:21.322991 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetState
I1031 19:38:21.324182 745229 main.go:134] libmachine: Detecting operating system of created instance...
I1031 19:38:21.324191 745229 main.go:134] libmachine: Waiting for SSH to be available...
I1031 19:38:21.324197 745229 main.go:134] libmachine: Getting to WaitForSSH function...
I1031 19:38:21.324203 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:21.326396 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.326808 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:21.326834 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.326984 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:21.327146 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:21.327339 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:21.327475 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:21.327690 745229 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:21.327861 745229 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.254 22 <nil> <nil>}
I1031 19:38:21.327866 745229 main.go:134] libmachine: About to run SSH command:
exit 0
I1031 19:38:21.440855 745229 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1031 19:38:21.440869 745229 main.go:134] libmachine: Detecting the provisioner...
I1031 19:38:21.440877 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:21.444109 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.444422 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:21.444451 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.444605 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:21.444846 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:21.445018 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:21.445193 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:21.445349 745229 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:21.445488 745229 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.254 22 <nil> <nil>}
I1031 19:38:21.445496 745229 main.go:134] libmachine: About to run SSH command:
cat /etc/os-release
I1031 19:38:21.562304 745229 main.go:134] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-gb347f1c-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I1031 19:38:21.562376 745229 main.go:134] libmachine: found compatible host: buildroot
I1031 19:38:21.562386 745229 main.go:134] libmachine: Provisioning with buildroot...
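Provisioner detection is driven by the /etc/os-release contents just printed: the ID field (buildroot) selects the matching provisioner. A sketch of that parse, as a hypothetical helper rather than libmachine's implementation:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// detectProvisioner reads /etc/os-release (the file cat'ed over SSH above)
// and returns its ID field, which is how a compatible host like buildroot
// gets recognized.
func detectProvisioner(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			return strings.Trim(v, `"`), nil
		}
	}
	return "", fmt.Errorf("no ID field in %s", path)
}

func main() {
	id, err := detectProvisioner("/etc/os-release")
	fmt.Println(id, err) // e.g. "buildroot" on the minikube guest
}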
I1031 19:38:21.562396 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetMachineName
I1031 19:38:21.562703 745229 buildroot.go:166] provisioning hostname "multinode-190825-m03"
I1031 19:38:21.562734 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetMachineName
I1031 19:38:21.562960 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:21.566039 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.566465 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:21.566488 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.566674 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:21.566874 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:21.567041 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:21.567198 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:21.567378 745229 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:21.567528 745229 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.254 22 <nil> <nil>}
I1031 19:38:21.567537 745229 main.go:134] libmachine: About to run SSH command:
sudo hostname multinode-190825-m03 && echo "multinode-190825-m03" | sudo tee /etc/hostname
I1031 19:38:21.697916 745229 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-190825-m03
I1031 19:38:21.697960 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:21.701231 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.701678 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:21.701707 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.701883 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:21.702060 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:21.702195 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:21.702401 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:21.702544 745229 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:21.702673 745229 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.254 22 <nil> <nil>}
I1031 19:38:21.702685 745229 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-190825-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-190825-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-190825-m03' | sudo tee -a /etc/hosts;
fi
fi
I1031 19:38:21.826694 745229 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1031 19:38:21.826724 745229 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15232-712341/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-712341/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-712341/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-712341/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-712341/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-712341/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-712341/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-712341/.minikube}
I1031 19:38:21.826748 745229 buildroot.go:174] setting up certificates
I1031 19:38:21.826762 745229 provision.go:83] configureAuth start
I1031 19:38:21.826771 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetMachineName
I1031 19:38:21.827068 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetIP
I1031 19:38:21.830205 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.830651 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:21.830688 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.830931 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:21.833462 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.833842 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:21.833865 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.834016 745229 provision.go:138] copyHostCerts
I1031 19:38:21.834097 745229 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-712341/.minikube/ca.pem, removing ...
I1031 19:38:21.834107 745229 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-712341/.minikube/ca.pem
I1031 19:38:21.834170 745229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-712341/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-712341/.minikube/ca.pem (1082 bytes)
I1031 19:38:21.834256 745229 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-712341/.minikube/cert.pem, removing ...
I1031 19:38:21.834260 745229 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-712341/.minikube/cert.pem
I1031 19:38:21.834291 745229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-712341/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-712341/.minikube/cert.pem (1123 bytes)
I1031 19:38:21.834366 745229 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-712341/.minikube/key.pem, removing ...
I1031 19:38:21.834369 745229 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-712341/.minikube/key.pem
I1031 19:38:21.834392 745229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-712341/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-712341/.minikube/key.pem (1679 bytes)
I1031 19:38:21.834431 745229 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-712341/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-712341/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-712341/.minikube/certs/ca-key.pem org=jenkins.multinode-190825-m03 san=[192.168.50.254 192.168.50.254 localhost 127.0.0.1 minikube multinode-190825-m03]
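The "generating server cert" step issues a server certificate signed by the local minikube CA, carrying the SANs listed on the line above (the VM IP, localhost, 127.0.0.1, and the hostnames). A self-contained Go sketch of producing such a SAN-bearing certificate with crypto/x509; it creates a throwaway CA instead of loading ca.pem/ca-key.pem:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for the ca.pem / ca-key.pem pair the real
	// flow loads from the .minikube/certs directory.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-190825-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.254"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-190825-m03"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}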
I1031 19:38:21.962935 745229 provision.go:172] copyRemoteCerts
I1031 19:38:21.962976 745229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1031 19:38:21.963000 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:21.966027 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.966459 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:21.966510 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:21.966681 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:21.966832 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:21.966941 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:21.967128 745229 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03/id_rsa Username:docker}
I1031 19:38:22.055104 745229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-712341/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1031 19:38:22.081718 745229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-712341/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I1031 19:38:22.107203 745229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-712341/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1031 19:38:22.130718 745229 provision.go:86] duration metric: configureAuth took 303.940566ms
I1031 19:38:22.130740 745229 buildroot.go:189] setting minikube options for container-runtime
I1031 19:38:22.130917 745229 config.go:180] Loaded profile config "multinode-190825-m03": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1031 19:38:22.130937 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .DriverName
I1031 19:38:22.131228 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:22.134148 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:22.134479 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:22.134514 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:22.134626 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:22.134815 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:22.134984 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:22.135095 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:22.135254 745229 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:22.135374 745229 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.254 22 <nil> <nil>}
I1031 19:38:22.135380 745229 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1031 19:38:22.251479 745229 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I1031 19:38:22.251498 745229 buildroot.go:70] root file system type: tmpfs
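That filesystem probe is what classifies the guest as the ephemeral Buildroot image: a tmpfs root means anything written outside persisted paths is lost on reboot. The check is easy to repeat by hand (GNU df):

df --output=fstype / | tail -n 1   # prints "tmpfs" inside the minikube VM, typically "ext4" on a host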
I1031 19:38:22.251742 745229 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1031 19:38:22.251765 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:22.254819 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:22.255247 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:22.255276 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:22.255460 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:22.255675 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:22.255874 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:22.256043 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:22.256218 745229 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:22.256349 745229 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.254 22 <nil> <nil>}
I1031 19:38:22.256399 745229 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this option.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1031 19:38:22.382443 745229 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this option.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1031 19:38:22.382476 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:22.385478 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:22.385944 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:22.385970 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:22.386163 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:22.386462 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:22.386703 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:22.386913 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:22.387135 745229 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:22.387301 745229 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.254 22 <nil> <nil>}
I1031 19:38:22.387314 745229 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1031 19:38:23.156546 745229 main.go:134] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
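diff exits non-zero when the units differ or, as here, when the installed unit does not exist yet, so the fallback branch installs the freshly rendered file and enables and restarts docker only when something actually changed. The idiom, reduced to its shape with illustrative paths:

diff -u current.service new.service \
  || { mv new.service current.service; systemctl daemon-reload; systemctl restart docker; }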
I1031 19:38:23.156565 745229 main.go:134] libmachine: Checking connection to Docker...
I1031 19:38:23.156574 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetURL
I1031 19:38:23.158169 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | Using libvirt version 6000000
I1031 19:38:23.160952 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.161363 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:23.161408 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.161610 745229 main.go:134] libmachine: Docker is up and running!
I1031 19:38:23.161622 745229 main.go:134] libmachine: Reticulating splines...
I1031 19:38:23.161629 745229 client.go:171] LocalClient.Create took 24.438252566s
I1031 19:38:23.161652 745229 start.go:167] duration metric: libmachine.API.Create for "multinode-190825-m03" took 24.438300456s
I1031 19:38:23.161662 745229 start.go:300] post-start starting for "multinode-190825-m03" (driver="kvm2")
I1031 19:38:23.161667 745229 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1031 19:38:23.161686 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .DriverName
I1031 19:38:23.161977 745229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1031 19:38:23.162000 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:23.164719 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.165155 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:23.165187 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.165357 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:23.165610 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:23.165816 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:23.166015 745229 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03/id_rsa Username:docker}
I1031 19:38:23.251784 745229 ssh_runner.go:195] Run: cat /etc/os-release
I1031 19:38:23.256096 745229 info.go:137] Remote host: Buildroot 2021.02.12
I1031 19:38:23.256112 745229 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-712341/.minikube/addons for local assets ...
I1031 19:38:23.256174 745229 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-712341/.minikube/files for local assets ...
I1031 19:38:23.256245 745229 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-712341/.minikube/files/etc/ssl/certs/7195012.pem -> 7195012.pem in /etc/ssl/certs
I1031 19:38:23.256322 745229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1031 19:38:23.265382 745229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-712341/.minikube/files/etc/ssl/certs/7195012.pem --> /etc/ssl/certs/7195012.pem (1708 bytes)
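The filesync scan explains where that 7195012.pem comes from: any file placed under .minikube/files on the host is mirrored to the same path inside the VM on each provision. For example, to inject an extra CA certificate (host side, names illustrative):

mkdir -p "$HOME/.minikube/files/etc/ssl/certs"
cp extra-ca.pem "$HOME/.minikube/files/etc/ssl/certs/"   # appears in the VM at /etc/ssl/certs/extra-ca.pem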
I1031 19:38:23.289025 745229 start.go:303] post-start completed in 127.349159ms
I1031 19:38:23.289083 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetConfigRaw
I1031 19:38:23.289831 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetIP
I1031 19:38:23.292694 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.293064 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:23.293086 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.293343 745229 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-712341/.minikube/profiles/multinode-190825-m03/config.json ...
I1031 19:38:23.293544 745229 start.go:128] duration metric: createHost completed in 24.589751332s
I1031 19:38:23.293562 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:23.295839 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.296139 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:23.296166 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.296320 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:23.296528 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:23.296716 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:23.296847 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:23.296985 745229 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:23.297106 745229 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.254 22 <nil> <nil>}
I1031 19:38:23.297112 745229 main.go:134] libmachine: About to run SSH command:
date +%s.%N
I1031 19:38:23.410302 745229 main.go:134] libmachine: SSH cmd err, output: <nil>: 1667245103.392176876
I1031 19:38:23.410319 745229 fix.go:207] guest clock: 1667245103.392176876
I1031 19:38:23.410332 745229 fix.go:220] Guest: 2022-10-31 19:38:23.392176876 +0000 UTC Remote: 2022-10-31 19:38:23.293550701 +0000 UTC m=+24.732481164 (delta=98.626175ms)
I1031 19:38:23.410366 745229 fix.go:191] guest clock delta is within tolerance: 98.626175ms
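The guest-clock check compares a timestamp read over SSH against the host wall clock; a delta inside the tolerance (98.6ms here) means no forced time sync. A rough replay of the arithmetic, using the guest value from the log (GNU date, bc):

host=$(date +%s.%N)
echo "$host - 1667245103.392176876" | bc   # skew in seconds; this run measured ~0.0986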
I1031 19:38:23.410372 745229 start.go:83] releasing machines lock for "multinode-190825-m03", held for 24.706645816s
I1031 19:38:23.410421 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .DriverName
I1031 19:38:23.410787 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetIP
I1031 19:38:23.413765 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.414175 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:23.414199 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.414405 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .DriverName
I1031 19:38:23.414967 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .DriverName
I1031 19:38:23.415191 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .DriverName
I1031 19:38:23.415296 745229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1031 19:38:23.415341 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:23.415481 745229 ssh_runner.go:195] Run: systemctl --version
I1031 19:38:23.415501 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHHostname
I1031 19:38:23.418356 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.418647 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.418713 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:23.418735 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.418865 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:23.419025 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:23.419108 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:99:19", ip: ""} in network mk-multinode-190825-m03: {Iface:virbr2 ExpiryTime:2022-10-31 20:38:14 +0000 UTC Type:0 Mac:52:54:00:e7:99:19 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:multinode-190825-m03 Clientid:01:52:54:00:e7:99:19}
I1031 19:38:23.419133 745229 main.go:134] libmachine: (multinode-190825-m03) DBG | domain multinode-190825-m03 has defined IP address 192.168.50.254 and MAC address 52:54:00:e7:99:19 in network mk-multinode-190825-m03
I1031 19:38:23.419145 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:23.419297 745229 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03/id_rsa Username:docker}
I1031 19:38:23.419360 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHPort
I1031 19:38:23.419535 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHKeyPath
I1031 19:38:23.419706 745229 main.go:134] libmachine: (multinode-190825-m03) Calling .GetSSHUsername
I1031 19:38:23.419868 745229 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-712341/.minikube/machines/multinode-190825-m03/id_rsa Username:docker}
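Two probes now run in parallel over separate SSH sessions: a reachability check against the image registry with a two-second timeout, and a systemd version query. Both are simple to repeat from inside the VM:

curl -sS -m 2 https://registry.k8s.io/   # fails fast if the guest has no outbound DNS/HTTPS
systemctl --version | head -n 1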
I1031 19:38:23.527705 745229 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1031 19:38:23.527800 745229 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1031 19:38:23.558785 745229 docker.go:613] Got preloaded images:
I1031 19:38:23.558800 745229 docker.go:619] registry.k8s.io/kube-apiserver:v1.25.3 wasn't preloaded
I1031 19:38:23.558848 745229 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1031 19:38:23.568765 745229 ssh_runner.go:195] Run: which lz4
I1031 19:38:23.572573 745229 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1031 19:38:23.576710 745229 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1031 19:38:23.576733 745229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-712341/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (404166592 bytes)
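This sequence is the preload fast path failing closed: the expected images are not in the docker image store, /preloaded.tar.lz4 is not on the VM, so the roughly 404MB tarball is copied over from the host cache. A condensed sketch of that fallback, with illustrative ssh and scp details:

ssh docker@192.168.50.254 'stat -c "%s %y" /preloaded.tar.lz4' \
  || scp preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 \
       docker@192.168.50.254:/preloaded.tar.lz4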
*
* ==> Docker <==
* -- Journal begins at Mon 2022-10-31 19:28:14 UTC, ends at Mon 2022-10-31 19:38:26 UTC. --
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.057226768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.057277202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.057286508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.057843072Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ce295a05881fd90ce4d0fee336a363a11c4df93aadb4f917b577e1610bcb5a14 pid=2089 runtime=io.containerd.runc.v2
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.430234105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.430550867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.430651972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.430890639Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a1c5d3984100399ced38eb641b874fa63451ddee7ee5ef9fe22238aa62c0cd17 pid=2158 runtime=io.containerd.runc.v2
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.837482679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.838236731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.838527668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 19:28:41 multinode-190825 dockerd[889]: time="2022-10-31T19:28:41.841300123Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/53674648baf73960a1cf53a40cc626e7d313dde02abfa66de84838d18c02e726 pid=2221 runtime=io.containerd.runc.v2
Oct 31 19:28:44 multinode-190825 dockerd[889]: time="2022-10-31T19:28:44.721477867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 19:28:44 multinode-190825 dockerd[889]: time="2022-10-31T19:28:44.721750360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 19:28:44 multinode-190825 dockerd[889]: time="2022-10-31T19:28:44.721864299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 19:28:44 multinode-190825 dockerd[889]: time="2022-10-31T19:28:44.723209257Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/84e47101c9034bda802455f7a1ba7b59e130e54e7391c125caad89cda1d9bac9 pid=2376 runtime=io.containerd.runc.v2
Oct 31 19:29:12 multinode-190825 dockerd[889]: time="2022-10-31T19:29:12.034817897Z" level=info msg="shim disconnected" id=53674648baf73960a1cf53a40cc626e7d313dde02abfa66de84838d18c02e726
Oct 31 19:29:12 multinode-190825 dockerd[883]: time="2022-10-31T19:29:12.035813005Z" level=info msg="ignoring event" container=53674648baf73960a1cf53a40cc626e7d313dde02abfa66de84838d18c02e726 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 31 19:29:12 multinode-190825 dockerd[889]: time="2022-10-31T19:29:12.036744422Z" level=warning msg="cleaning up after shim disconnected" id=53674648baf73960a1cf53a40cc626e7d313dde02abfa66de84838d18c02e726 namespace=moby
Oct 31 19:29:12 multinode-190825 dockerd[889]: time="2022-10-31T19:29:12.037709882Z" level=info msg="cleaning up dead shim"
Oct 31 19:29:12 multinode-190825 dockerd[889]: time="2022-10-31T19:29:12.050587524Z" level=warning msg="cleanup warnings time=\"2022-10-31T19:29:12Z\" level=info msg=\"starting signal loop\" namespace=moby pid=2790 runtime=io.containerd.runc.v2\n"
Oct 31 19:29:23 multinode-190825 dockerd[889]: time="2022-10-31T19:29:23.672514180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 19:29:23 multinode-190825 dockerd[889]: time="2022-10-31T19:29:23.672598218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 19:29:23 multinode-190825 dockerd[889]: time="2022-10-31T19:29:23.672610023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 19:29:23 multinode-190825 dockerd[889]: time="2022-10-31T19:29:23.673419246Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/20a3859c2046a80cfcdc0a5d3347b3d9affdfbf9cdf3a882610358c865a25010 pid=2963 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
20a3859c2046a 6e38f40d628db 9 minutes ago Running storage-provisioner 3 68840b4d2254a
84e47101c9034 d6e3e26021b60 9 minutes ago Running kindnet-cni 2 127d1d9f795e4
53674648baf73 6e38f40d628db 9 minutes ago Exited storage-provisioner 2 68840b4d2254a
a1c5d39841003 beaaf00edd38a 9 minutes ago Running kube-proxy 2 ce295a05881fd
bf97ff917b943 a8a176a5d5d69 9 minutes ago Running etcd 2 06c89abe2ecf8
7d5b16337b7e0 6d23ec0e8b87e 9 minutes ago Running kube-scheduler 2 a9fa49666ff0e
3b95e48af01ec 6039992312758 9 minutes ago Running kube-controller-manager 2 a41f65b62dbd5
d8f7d7f4426fa 0346dbd74bcb9 9 minutes ago Running kube-apiserver 2 4cdea99e637e5
502c180b64252 d6e3e26021b60 24 minutes ago Exited kindnet-cni 1 8595a81dac1ef
848cc50afe691 beaaf00edd38a 24 minutes ago Exited kube-proxy 1 d6d6a89c639a2
804087c06de12 a8a176a5d5d69 24 minutes ago Exited etcd 1 257e646a7f583
f3440cf645155 6d23ec0e8b87e 24 minutes ago Exited kube-scheduler 1 902da1d692b6f
73d47ffbb93a9 6039992312758 24 minutes ago Exited kube-controller-manager 1 6c72ea741b8ef
a47324bf28327 0346dbd74bcb9 24 minutes ago Exited kube-apiserver 1 e8f615e828717
22d59bb452ad7 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 27 minutes ago Exited busybox 0 6a6d76db675e8
7b28ffeb4bcb4 5185b96f0becf 28 minutes ago Exited coredns 0 7893985ada3ba
*
* ==> coredns [7b28ffeb4bcb] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 9a34f9264402cb585a9f45fa2022f72259f38c0069ff0551404dff6d373c3318d40dccb7d57503b326f0f19faa2110be407c171bae22df1ef9dd2930a017b6e6
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> describe nodes <==
* Name: multinode-190825
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-190825
kubernetes.io/os=linux
minikube.k8s.io/commit=1c73d673499e72567c9d9cb6c201ec071d452750
minikube.k8s.io/name=multinode-190825
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_10_31T19_09_19_0700
minikube.k8s.io/version=v1.27.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 31 Oct 2022 19:09:15 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-190825
AcquireTime: <unset>
RenewTime: Mon, 31 Oct 2022 19:38:20 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 31 Oct 2022 19:34:05 +0000 Mon, 31 Oct 2022 19:09:12 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 31 Oct 2022 19:34:05 +0000 Mon, 31 Oct 2022 19:09:12 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 31 Oct 2022 19:34:05 +0000 Mon, 31 Oct 2022 19:09:12 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 31 Oct 2022 19:34:05 +0000 Mon, 31 Oct 2022 19:28:58 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.198
Hostname: multinode-190825
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 296b4a01a4d647c18e6f1455d75dc885
System UUID: 296b4a01-a4d6-47c1-8e6f-1455d75dc885
Boot ID: 10998645-31a3-4578-9b6f-f2f0c1040997
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-65db55d5d6-lv6nd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
kube-system coredns-565d847f94-8mj8f 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 28m
kube-system etcd-multinode-190825 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 29m
kube-system kindnet-sjmgp 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 28m
kube-system kube-apiserver-multinode-190825 250m (12%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system kube-controller-manager-multinode-190825 200m (10%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system kube-proxy-pqmxb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m
kube-system kube-scheduler-multinode-190825 100m (5%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 28m kube-proxy
Normal Starting 9m44s kube-proxy
Normal Starting 24m kube-proxy
Normal NodeHasSufficientMemory 29m (x5 over 29m) kubelet Node multinode-190825 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 29m (x5 over 29m) kubelet Node multinode-190825 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 29m (x4 over 29m) kubelet Node multinode-190825 status is now: NodeHasSufficientPID
Normal Starting 29m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 29m kubelet Node multinode-190825 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 29m kubelet Node multinode-190825 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 29m kubelet Node multinode-190825 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 29m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 28m node-controller Node multinode-190825 event: Registered Node multinode-190825 in Controller
Normal NodeReady 28m kubelet Node multinode-190825 status is now: NodeReady
Normal Starting 24m kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 24m (x8 over 24m) kubelet Node multinode-190825 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 24m (x8 over 24m) kubelet Node multinode-190825 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 24m (x7 over 24m) kubelet Node multinode-190825 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 24m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 24m node-controller Node multinode-190825 event: Registered Node multinode-190825 in Controller
Normal Starting 9m56s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 9m56s (x8 over 9m56s) kubelet Node multinode-190825 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m56s (x8 over 9m56s) kubelet Node multinode-190825 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m56s (x7 over 9m56s) kubelet Node multinode-190825 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 9m56s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 9m35s node-controller Node multinode-190825 event: Registered Node multinode-190825 in Controller

Name: multinode-190825-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-190825-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 31 Oct 2022 19:33:35 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-190825-m02
AcquireTime: <unset>
RenewTime: Mon, 31 Oct 2022 19:38:20 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 31 Oct 2022 19:33:56 +0000 Mon, 31 Oct 2022 19:33:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 31 Oct 2022 19:33:56 +0000 Mon, 31 Oct 2022 19:33:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 31 Oct 2022 19:33:56 +0000 Mon, 31 Oct 2022 19:33:35 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 31 Oct 2022 19:33:56 +0000 Mon, 31 Oct 2022 19:33:56 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.34
Hostname: multinode-190825-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 126a2f82afdd4e3b9a8b47ae96adc289
System UUID: 126a2f82-afdd-4e3b-9a8b-47ae96adc289
Boot ID: e4bf1460-d59b-4b8b-a4af-e85855c70a4c
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-65db55d5d6-86z7z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
kube-system kindnet-q5s42 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 27m
kube-system kube-proxy-ztszv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 19m kube-proxy
Normal Starting 27m kube-proxy
Normal Starting 4m48s kube-proxy
Normal NodeHasNoDiskPressure 27m (x8 over 27m) kubelet Node multinode-190825-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 27m (x8 over 27m) kubelet Node multinode-190825-m02 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientMemory 19m (x2 over 19m) kubelet Node multinode-190825-m02 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 19m (x2 over 19m) kubelet Node multinode-190825-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 19m kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 19m (x2 over 19m) kubelet Node multinode-190825-m02 status is now: NodeHasNoDiskPressure
Normal Starting 19m kubelet Starting kubelet.
Normal NodeReady 19m kubelet Node multinode-190825-m02 status is now: NodeReady
Normal Starting 4m51s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m51s (x2 over 4m51s) kubelet Node multinode-190825-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m51s (x2 over 4m51s) kubelet Node multinode-190825-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m51s (x2 over 4m51s) kubelet Node multinode-190825-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m51s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 4m30s kubelet Node multinode-190825-m02 status is now: NodeReady
*
* ==> dmesg <==
* [Oct31 19:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.073015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.902575] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.185081] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.149538] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.422775] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +5.584877] systemd-fstab-generator[513]: Ignoring "noauto" for root device
[ +0.109355] systemd-fstab-generator[524]: Ignoring "noauto" for root device
[ +1.134672] systemd-fstab-generator[796]: Ignoring "noauto" for root device
[ +0.295002] systemd-fstab-generator[852]: Ignoring "noauto" for root device
[ +0.105035] systemd-fstab-generator[863]: Ignoring "noauto" for root device
[ +0.104117] systemd-fstab-generator[874]: Ignoring "noauto" for root device
[ +1.647044] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
[ +0.101291] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
[ +4.650884] systemd-fstab-generator[1257]: Ignoring "noauto" for root device
[ +0.383375] kauditd_printk_skb: 67 callbacks suppressed
[ +11.609160] kauditd_printk_skb: 8 callbacks suppressed
*
* ==> etcd [804087c06de1] <==
* {"level":"info","ts":"2022-10-31T19:13:49.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 switched to configuration voters=(17425178282036469987)"}
{"level":"info","ts":"2022-10-31T19:13:49.365Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","added-peer-id":"f1d2ab5330a2a0e3","added-peer-peer-urls":["https://192.168.39.198:2380"]}
{"level":"info","ts":"2022-10-31T19:13:49.365Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
{"level":"info","ts":"2022-10-31T19:13:49.365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-10-31T19:13:49.373Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-10-31T19:13:49.374Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f1d2ab5330a2a0e3","initial-advertise-peer-urls":["https://192.168.39.198:2380"],"listen-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.198:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-10-31T19:13:49.374Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-10-31T19:13:49.374Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.198:2380"}
{"level":"info","ts":"2022-10-31T19:13:49.374Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.198:2380"}
{"level":"info","ts":"2022-10-31T19:13:50.611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 is starting a new election at term 2"}
{"level":"info","ts":"2022-10-31T19:13:50.612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 2"}
{"level":"info","ts":"2022-10-31T19:13:50.612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 2"}
{"level":"info","ts":"2022-10-31T19:13:50.612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 3"}
{"level":"info","ts":"2022-10-31T19:13:50.612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 3"}
{"level":"info","ts":"2022-10-31T19:13:50.612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 3"}
{"level":"info","ts":"2022-10-31T19:13:50.612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 3"}
{"level":"info","ts":"2022-10-31T19:13:50.614Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:multinode-190825 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
{"level":"info","ts":"2022-10-31T19:13:50.615Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T19:13:50.615Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T19:13:50.617Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.198:2379"}
{"level":"info","ts":"2022-10-31T19:13:50.620Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-10-31T19:13:50.621Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-10-31T19:13:50.621Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-10-31T19:23:50.644Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1118}
{"level":"info","ts":"2022-10-31T19:23:50.666Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1118,"took":"21.665506ms"}
*
* ==> etcd [bf97ff917b94] <==
* {"level":"info","ts":"2022-10-31T19:28:34.639Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"f1d2ab5330a2a0e3","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-10-31T19:28:34.640Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 switched to configuration voters=(17425178282036469987)"}
{"level":"info","ts":"2022-10-31T19:28:34.641Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","added-peer-id":"f1d2ab5330a2a0e3","added-peer-peer-urls":["https://192.168.39.198:2380"]}
{"level":"info","ts":"2022-10-31T19:28:34.641Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
{"level":"info","ts":"2022-10-31T19:28:34.641Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-10-31T19:28:34.649Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.198:2380"}
{"level":"info","ts":"2022-10-31T19:28:34.649Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.198:2380"}
{"level":"info","ts":"2022-10-31T19:28:34.649Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f1d2ab5330a2a0e3","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2022-10-31T19:28:34.646Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-10-31T19:28:34.658Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f1d2ab5330a2a0e3","initial-advertise-peer-urls":["https://192.168.39.198:2380"],"listen-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.198:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-10-31T19:28:34.658Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-10-31T19:28:35.419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 is starting a new election at term 3"}
{"level":"info","ts":"2022-10-31T19:28:35.419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 3"}
{"level":"info","ts":"2022-10-31T19:28:35.419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 3"}
{"level":"info","ts":"2022-10-31T19:28:35.419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 4"}
{"level":"info","ts":"2022-10-31T19:28:35.420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 4"}
{"level":"info","ts":"2022-10-31T19:28:35.420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 4"}
{"level":"info","ts":"2022-10-31T19:28:35.420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 4"}
{"level":"info","ts":"2022-10-31T19:28:35.421Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:multinode-190825 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
{"level":"info","ts":"2022-10-31T19:28:35.422Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T19:28:35.422Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T19:28:35.424Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.198:2379"}
{"level":"info","ts":"2022-10-31T19:28:35.424Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-10-31T19:28:35.424Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-10-31T19:28:35.425Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> kernel <==
* 19:38:27 up 10 min, 0 users, load average: 1.79, 0.77, 0.39
Linux multinode-190825 5.10.57 #1 SMP Fri Oct 28 21:02:11 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [a47324bf2832] <==
* I1031 19:13:52.962115 1 controller.go:85] Starting OpenAPI controller
I1031 19:13:52.962333 1 controller.go:85] Starting OpenAPI V3 controller
I1031 19:13:52.963118 1 naming_controller.go:291] Starting NamingConditionController
I1031 19:13:52.963416 1 establishing_controller.go:76] Starting EstablishingController
I1031 19:13:52.963658 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1031 19:13:52.963896 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1031 19:13:52.964082 1 crd_finalizer.go:266] Starting CRDFinalizer
I1031 19:13:53.073609 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
E1031 19:13:53.085437 1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1031 19:13:53.144866 1 apf_controller.go:305] Running API Priority and Fairness config worker
I1031 19:13:53.146226 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1031 19:13:53.147308 1 cache.go:39] Caches are synced for autoregister controller
I1031 19:13:53.152042 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1031 19:13:53.152262 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1031 19:13:53.153855 1 shared_informer.go:262] Caches are synced for crd-autoregister
I1031 19:13:53.156184 1 shared_informer.go:262] Caches are synced for node_authorizer
I1031 19:13:53.686044 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1031 19:13:53.952100 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1031 19:13:55.771859 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I1031 19:13:55.987294 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I1031 19:13:55.998907 1 controller.go:616] quota admission added evaluator for: deployments.apps
I1031 19:13:56.077266 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1031 19:13:56.083377 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1031 19:14:05.952623 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1031 19:14:06.052315 1 controller.go:616] quota admission added evaluator for: endpoints
*
* ==> kube-apiserver [d8f7d7f4426f] <==
* I1031 19:28:38.369084 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I1031 19:28:38.379463 1 controller.go:85] Starting OpenAPI controller
I1031 19:28:38.379528 1 controller.go:85] Starting OpenAPI V3 controller
I1031 19:28:38.379682 1 naming_controller.go:291] Starting NamingConditionController
I1031 19:28:38.379750 1 establishing_controller.go:76] Starting EstablishingController
I1031 19:28:38.379891 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1031 19:28:38.379936 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1031 19:28:38.379956 1 crd_finalizer.go:266] Starting CRDFinalizer
I1031 19:28:38.445310 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1031 19:28:38.455499 1 apf_controller.go:305] Running API Priority and Fairness config worker
I1031 19:28:38.456411 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1031 19:28:38.456738 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1031 19:28:38.456918 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I1031 19:28:38.458339 1 cache.go:39] Caches are synced for autoregister controller
E1031 19:28:38.461864 1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1031 19:28:38.473958 1 shared_informer.go:262] Caches are synced for crd-autoregister
I1031 19:28:38.493667 1 shared_informer.go:262] Caches are synced for node_authorizer
I1031 19:28:39.079483 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1031 19:28:39.354541 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1031 19:28:40.810656 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I1031 19:28:41.035540 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I1031 19:28:41.051802 1 controller.go:616] quota admission added evaluator for: deployments.apps
I1031 19:28:41.160262 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1031 19:28:41.172749 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1031 19:29:41.181562 1 controller.go:616] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [3b95e48af01e] <==
* I1031 19:28:51.688348 1 shared_informer.go:262] Caches are synced for deployment
I1031 19:28:51.754223 1 shared_informer.go:262] Caches are synced for resource quota
I1031 19:28:51.758667 1 shared_informer.go:262] Caches are synced for endpoint
I1031 19:28:51.774760 1 shared_informer.go:262] Caches are synced for endpoint_slice
I1031 19:28:51.816525 1 shared_informer.go:262] Caches are synced for resource quota
I1031 19:28:51.837733 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I1031 19:28:52.195273 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 19:28:52.213808 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 19:28:52.213852 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1031 19:28:58.855536 1 topologycache.go:199] Can't get CPU or zone information for multinode-190825-m02 node
I1031 19:29:01.717636 1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
I1031 19:29:01.717801 1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94-8mj8f" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-565d847f94-8mj8f"
I1031 19:29:01.717902 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-lv6nd" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-lv6nd"
I1031 19:29:31.688033 1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-tdtkt"
I1031 19:29:31.703535 1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-tdtkt"
I1031 19:29:31.703666 1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kindnet-fzphd"
I1031 19:29:31.744676 1 event.go:294] "Event occurred" object="multinode-190825-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-190825-m02 status is now: NodeNotReady"
I1031 19:29:31.784859 1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-fzphd"
I1031 19:29:31.800863 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-ztszv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 19:29:31.822805 1 event.go:294] "Event occurred" object="kube-system/kindnet-q5s42" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
W1031 19:33:35.399729 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-190825-m02" does not exist
I1031 19:33:35.400678 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-86z7z" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-86z7z"
I1031 19:33:35.408379 1 range_allocator.go:367] Set node multinode-190825-m02 PodCIDR to [10.244.1.0/24]
W1031 19:33:56.111715 1 topologycache.go:199] Can't get CPU or zone information for multinode-190825-m02 node
I1031 19:33:56.874791 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-86z7z" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-86z7z"
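In the controller-manager log above, PodGC force-deletes kube-proxy-tdtkt and kindnet-fzphd because they are still bound to the removed multinode-190825-m03 node (see the RemovingNode event in the other controller-manager instance below). A rough client-go approximation of the query PodGC acts on, assuming a reachable cluster via the default kubeconfig; the node name is taken from the log, everything else is illustrative:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumes ~/.kube/config points at a reachable cluster; adjust as needed.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // List pods, across all namespaces, still bound to the node that was deleted.
    pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
        metav1.ListOptions{FieldSelector: "spec.nodeName=multinode-190825-m03"})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        fmt.Printf("%s/%s is orphaned on a deleted node\n", p.Namespace, p.Name)
    }
}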
*
* ==> kube-controller-manager [73d47ffbb93a] <==
* I1031 19:14:15.727085 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-lv6nd" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-lv6nd"
I1031 19:14:15.727094 1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94-8mj8f" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-565d847f94-8mj8f"
W1031 19:14:45.740455 1 topologycache.go:199] Can't get CPU or zone information for multinode-190825-m03 node
I1031 19:14:45.742877 1 event.go:294] "Event occurred" object="multinode-190825-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-190825-m02 status is now: NodeNotReady"
I1031 19:14:45.758585 1 event.go:294] "Event occurred" object="kube-system/kindnet-q5s42" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 19:14:45.766730 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-ztszv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 19:14:45.778081 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-86z7z" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 19:14:45.793115 1 event.go:294] "Event occurred" object="multinode-190825-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-190825-m03 status is now: NodeNotReady"
I1031 19:14:45.806966 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-tdtkt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 19:14:45.819931 1 event.go:294] "Event occurred" object="kube-system/kindnet-fzphd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1031 19:18:49.707908 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-vnk7g"
W1031 19:18:53.583447 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-190825-m02" does not exist
I1031 19:18:53.583852 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-86z7z" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-86z7z"
I1031 19:18:53.591636 1 range_allocator.go:367] Set node multinode-190825-m02 PodCIDR to [10.244.1.0/24]
W1031 19:19:03.921826 1 topologycache.go:199] Can't get CPU or zone information for multinode-190825-m02 node
I1031 19:19:05.879805 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-86z7z" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-86z7z"
W1031 19:23:29.303776 1 topologycache.go:199] Can't get CPU or zone information for multinode-190825-m02 node
W1031 19:23:30.208039 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-190825-m03" does not exist
W1031 19:23:30.208371 1 topologycache.go:199] Can't get CPU or zone information for multinode-190825-m02 node
I1031 19:23:30.215931 1 range_allocator.go:367] Set node multinode-190825-m03 PodCIDR to [10.244.2.0/24]
W1031 19:23:50.473826 1 topologycache.go:199] Can't get CPU or zone information for multinode-190825-m02 node
I1031 19:23:50.938448 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-vnk7g" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-vnk7g"
I1031 19:27:53.046201 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-ds7ds"
W1031 19:27:55.064766 1 topologycache.go:199] Can't get CPU or zone information for multinode-190825-m02 node
I1031 19:27:55.979458 1 event.go:294] "Event occurred" object="multinode-190825-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-190825-m03 event: Removing Node multinode-190825-m03 from Controller"
*
* ==> kube-proxy [848cc50afe69] <==
* I1031 19:13:55.616272 1 node.go:163] Successfully retrieved node IP: 192.168.39.198
I1031 19:13:55.616580 1 server_others.go:138] "Detected node IP" address="192.168.39.198"
I1031 19:13:55.616779 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1031 19:13:55.719008 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I1031 19:13:55.719048 1 server_others.go:206] "Using iptables Proxier"
I1031 19:13:55.719390 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1031 19:13:55.720130 1 server.go:661] "Version info" version="v1.25.3"
I1031 19:13:55.720168 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 19:13:55.723767 1 config.go:444] "Starting node config controller"
I1031 19:13:55.723879 1 shared_informer.go:255] Waiting for caches to sync for node config
I1031 19:13:55.730594 1 config.go:317] "Starting service config controller"
I1031 19:13:55.730938 1 shared_informer.go:255] Waiting for caches to sync for service config
I1031 19:13:55.731064 1 config.go:226] "Starting endpoint slice config controller"
I1031 19:13:55.731070 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1031 19:13:55.825075 1 shared_informer.go:262] Caches are synced for node config
I1031 19:13:55.831429 1 shared_informer.go:262] Caches are synced for service config
I1031 19:13:55.832612 1 shared_informer.go:262] Caches are synced for endpoint slice config
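The "Waiting for caches to sync" / "Caches are synced" pairs here (and in the apiserver, controller-manager, and scheduler logs above) are the standard client-go shared-informer startup handshake: each controller blocks until its informer's local cache has caught up with the apiserver before it begins reconciling. A hedged sketch of that pattern with a hypothetical controller; the client-go calls are real, the rest is illustrative:

package main

import (
    "fmt"
    "time"

    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    stop := make(chan struct{})
    defer close(stop)

    factory := informers.NewSharedInformerFactory(client, 30*time.Second)
    svcInformer := factory.Core().V1().Services().Informer()

    factory.Start(stop) // begins the LIST+WATCH against the apiserver

    // "Waiting for caches to sync for service config"
    if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
        panic("timed out waiting for caches to sync")
    }
    // "Caches are synced for service config" -- safe to start reconciling now.
    fmt.Println("caches are synced; controller loop may start")
}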
*
* ==> kube-proxy [a1c5d3984100] <==
* I1031 19:28:41.826770 1 node.go:163] Successfully retrieved node IP: 192.168.39.198
I1031 19:28:41.827161 1 server_others.go:138] "Detected node IP" address="192.168.39.198"
I1031 19:28:41.827309 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1031 19:28:41.930227 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I1031 19:28:41.930244 1 server_others.go:206] "Using iptables Proxier"
I1031 19:28:41.930705 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1031 19:28:41.931353 1 server.go:661] "Version info" version="v1.25.3"
I1031 19:28:41.931367 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 19:28:41.933309 1 config.go:317] "Starting service config controller"
I1031 19:28:41.933325 1 shared_informer.go:255] Waiting for caches to sync for service config
I1031 19:28:41.933347 1 config.go:226] "Starting endpoint slice config controller"
I1031 19:28:41.933350 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1031 19:28:41.935710 1 config.go:444] "Starting node config controller"
I1031 19:28:41.935851 1 shared_informer.go:255] Waiting for caches to sync for node config
I1031 19:28:42.034093 1 shared_informer.go:262] Caches are synced for service config
I1031 19:28:42.034303 1 shared_informer.go:262] Caches are synced for endpoint slice config
I1031 19:28:42.036534 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [7d5b16337b7e] <==
* I1031 19:28:34.605263 1 serving.go:348] Generated self-signed cert in-memory
I1031 19:28:38.496180 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1031 19:28:38.496201 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 19:28:38.516758 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1031 19:28:38.516956 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1031 19:28:38.516964 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1031 19:28:38.517474 1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I1031 19:28:38.517638 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1031 19:28:38.517649 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1031 19:28:38.517663 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1031 19:28:38.517667 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1031 19:28:38.618458 1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I1031 19:28:38.619202 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1031 19:28:38.619543 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
E1031 19:29:34.672375 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"busybox-65db55d5d6-ds7ds.17233e1ac20d5de8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1148f878-fb42-4663-923a-f74e1d4f38f8", ResourceVersion:"1874", Generation:0, CreationTimestamp:time.Date(2022, time.October, 31, 19, 29, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:time.Date(2022, time.October, 31, 19, 29, 31, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0009dae28), Subresource:""}}}, EventTime:time.Date(2022, time.October, 31, 19, 29, 31, 702936634, time.Local), Series:(*v1.EventSeries)(0xc000641e80), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-multinode-190825", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"busybox-65db55d5d6-ds7ds", UID:"c27861b3-c842-4e00-ada8-6511d71381d9", APIVersion:"v1", ResourceVersion:"1745", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeprecatedLastTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeprecatedCount:0}': 'Event "busybox-65db55d5d6-ds7ds.17233e1ac20d5de8" is invalid: series.count: Invalid value: "": should be at least 2' (will not retry!)
*
* ==> kube-scheduler [f3440cf64515] <==
* I1031 19:13:49.979563 1 serving.go:348] Generated self-signed cert in-memory
W1031 19:13:53.033142 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1031 19:13:53.033323 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1031 19:13:53.033384 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1031 19:13:53.033396 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1031 19:13:53.083938 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1031 19:13:53.083985 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 19:13:53.094233 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1031 19:13:53.094349 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1031 19:13:53.094735 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1031 19:13:53.097399 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1031 19:13:53.195642 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Mon 2022-10-31 19:28:14 UTC, ends at Mon 2022-10-31 19:38:27 UTC. --
Oct 31 19:37:46 multinode-190825 kubelet[1263]: E1031 19:37:46.586071 1263 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21c0107d-d5d2-416c-b5a1-3ea74f176cd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-8mj8f_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-8mj8f" podUID=21c0107d-d5d2-416c-b5a1-3ea74f176cd3
Oct 31 19:37:48 multinode-190825 kubelet[1263]: E1031 19:37:48.583283 1263 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-lv6nd_default\" network: could not retrieve port mappings: key is not found" podSandboxID="6a6d76db675e80ea19fc036cfa34997e4be9b8d2856cbb141aadc26b74148e40"
Oct 31 19:37:48 multinode-190825 kubelet[1263]: E1031 19:37:48.583627 1263 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:6a6d76db675e80ea19fc036cfa34997e4be9b8d2856cbb141aadc26b74148e40}
Oct 31 19:37:48 multinode-190825 kubelet[1263]: E1031 19:37:48.583690 1263 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d911fa0a-d1a3-4302-abe8-4e702d7c396a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-lv6nd_default\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 19:37:48 multinode-190825 kubelet[1263]: E1031 19:37:48.583749 1263 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d911fa0a-d1a3-4302-abe8-4e702d7c396a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-lv6nd_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-lv6nd" podUID=d911fa0a-d1a3-4302-abe8-4e702d7c396a
Oct 31 19:37:57 multinode-190825 kubelet[1263]: E1031 19:37:57.585387 1263 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-8mj8f_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="7893985ada3bae7f958a872a0a8ca2dcd4f2f3dade1711d970a7f9d6370b81d3"
Oct 31 19:37:57 multinode-190825 kubelet[1263]: E1031 19:37:57.585794 1263 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:7893985ada3bae7f958a872a0a8ca2dcd4f2f3dade1711d970a7f9d6370b81d3}
Oct 31 19:37:57 multinode-190825 kubelet[1263]: E1031 19:37:57.585880 1263 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21c0107d-d5d2-416c-b5a1-3ea74f176cd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-8mj8f_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 19:37:57 multinode-190825 kubelet[1263]: E1031 19:37:57.585941 1263 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21c0107d-d5d2-416c-b5a1-3ea74f176cd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-8mj8f_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-8mj8f" podUID=21c0107d-d5d2-416c-b5a1-3ea74f176cd3
Oct 31 19:38:01 multinode-190825 kubelet[1263]: E1031 19:38:01.589072 1263 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-lv6nd_default\" network: could not retrieve port mappings: key is not found" podSandboxID="6a6d76db675e80ea19fc036cfa34997e4be9b8d2856cbb141aadc26b74148e40"
Oct 31 19:38:01 multinode-190825 kubelet[1263]: E1031 19:38:01.589202 1263 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:6a6d76db675e80ea19fc036cfa34997e4be9b8d2856cbb141aadc26b74148e40}
Oct 31 19:38:01 multinode-190825 kubelet[1263]: E1031 19:38:01.589232 1263 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d911fa0a-d1a3-4302-abe8-4e702d7c396a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-lv6nd_default\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 19:38:01 multinode-190825 kubelet[1263]: E1031 19:38:01.589256 1263 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d911fa0a-d1a3-4302-abe8-4e702d7c396a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-lv6nd_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-lv6nd" podUID=d911fa0a-d1a3-4302-abe8-4e702d7c396a
Oct 31 19:38:11 multinode-190825 kubelet[1263]: E1031 19:38:11.584511 1263 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-8mj8f_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="7893985ada3bae7f958a872a0a8ca2dcd4f2f3dade1711d970a7f9d6370b81d3"
Oct 31 19:38:11 multinode-190825 kubelet[1263]: E1031 19:38:11.584568 1263 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:7893985ada3bae7f958a872a0a8ca2dcd4f2f3dade1711d970a7f9d6370b81d3}
Oct 31 19:38:11 multinode-190825 kubelet[1263]: E1031 19:38:11.584607 1263 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21c0107d-d5d2-416c-b5a1-3ea74f176cd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-8mj8f_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 19:38:11 multinode-190825 kubelet[1263]: E1031 19:38:11.584642 1263 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21c0107d-d5d2-416c-b5a1-3ea74f176cd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-8mj8f_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-8mj8f" podUID=21c0107d-d5d2-416c-b5a1-3ea74f176cd3
Oct 31 19:38:15 multinode-190825 kubelet[1263]: E1031 19:38:15.582546 1263 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-lv6nd_default\" network: could not retrieve port mappings: key is not found" podSandboxID="6a6d76db675e80ea19fc036cfa34997e4be9b8d2856cbb141aadc26b74148e40"
Oct 31 19:38:15 multinode-190825 kubelet[1263]: E1031 19:38:15.582585 1263 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:6a6d76db675e80ea19fc036cfa34997e4be9b8d2856cbb141aadc26b74148e40}
Oct 31 19:38:15 multinode-190825 kubelet[1263]: E1031 19:38:15.582621 1263 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d911fa0a-d1a3-4302-abe8-4e702d7c396a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-lv6nd_default\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 19:38:15 multinode-190825 kubelet[1263]: E1031 19:38:15.582697 1263 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d911fa0a-d1a3-4302-abe8-4e702d7c396a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-lv6nd_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-lv6nd" podUID=d911fa0a-d1a3-4302-abe8-4e702d7c396a
Oct 31 19:38:25 multinode-190825 kubelet[1263]: E1031 19:38:25.584060 1263 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-8mj8f_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="7893985ada3bae7f958a872a0a8ca2dcd4f2f3dade1711d970a7f9d6370b81d3"
Oct 31 19:38:25 multinode-190825 kubelet[1263]: E1031 19:38:25.584675 1263 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:7893985ada3bae7f958a872a0a8ca2dcd4f2f3dade1711d970a7f9d6370b81d3}
Oct 31 19:38:25 multinode-190825 kubelet[1263]: E1031 19:38:25.584807 1263 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21c0107d-d5d2-416c-b5a1-3ea74f176cd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-8mj8f_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Oct 31 19:38:25 multinode-190825 kubelet[1263]: E1031 19:38:25.584903 1263 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21c0107d-d5d2-416c-b5a1-3ea74f176cd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-8mj8f_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-8mj8f" podUID=21c0107d-d5d2-416c-b5a1-3ea74f176cd3
*
* ==> storage-provisioner [20a3859c2046] <==
* I1031 19:29:23.760175 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1031 19:29:23.782067 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1031 19:29:23.782935 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1031 19:29:41.184716 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1031 19:29:41.186255 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f20fc1a-3170-43d5-a27e-b20a043c9074", APIVersion:"v1", ResourceVersion:"1892", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-190825_c69869bd-e92a-4a5d-b2fc-40718907ce03 became leader
I1031 19:29:41.186952 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-190825_c69869bd-e92a-4a5d-b2fc-40718907ce03!
I1031 19:29:41.289517 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-190825_c69869bd-e92a-4a5d-b2fc-40718907ce03!
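The provisioner above spends ~18s between "attempting to acquire" and "successfully acquired" the kube-system/k8s.io-minikube-hostpath lock (an Endpoints-based lock, per the event), and only then starts its controller. The same acquire-then-work flow, sketched with client-go's leaderelection package using the newer Lease lock; the identity string and timings are illustrative, not the provisioner's actual values:

package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/leaderelection"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Lease lock on the same name/namespace the provisioner uses above.
    lock := &resourcelock.LeaseLock{
        LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
        Client:     client.CoordinationV1(),
        LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"}, // illustrative
    }

    leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
        Lock:          lock,
        LeaseDuration: 15 * time.Second,
        RenewDeadline: 10 * time.Second,
        RetryPeriod:   2 * time.Second,
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: func(ctx context.Context) {
                // "successfully acquired lease" -- start the provisioner controller here.
                fmt.Println("became leader; starting work")
            },
            OnStoppedLeading: func() { fmt.Println("lost the lease; stopping") },
        },
    })
}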
*
* ==> storage-provisioner [53674648baf7] <==
* I1031 19:28:42.002723 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1031 19:29:12.010367 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-190825 -n multinode-190825
helpers_test.go:261: (dbg) Run: kubectl --context multinode-190825 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-65db55d5d6-ds7ds
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context multinode-190825 describe pod busybox-65db55d5d6-ds7ds
helpers_test.go:280: (dbg) kubectl --context multinode-190825 describe pod busybox-65db55d5d6-ds7ds:
-- stdout --
Name:             busybox-65db55d5d6-ds7ds
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=busybox
                  pod-template-hash=65db55d5d6
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/busybox-65db55d5d6
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxmrh (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-xxmrh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  10m    default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
  Warning  FailedScheduling  10m    default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
  Warning  FailedScheduling  9m49s  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
  Warning  FailedScheduling  8m56s  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
  Warning  FailedScheduling  4m31s  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (29.60s)
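The Pending busybox pod above is explained by its FailedScheduling events: every schedulable node already runs a replica that the new pod's anti-affinity forbids it to share a host with, so with only two ready nodes a third replica has nowhere to go. The test's Deployment evidently carries a required pod anti-affinity term over the app=busybox label; a Go reconstruction of a term of that shape (illustrative only, not the manifest the test actually applies):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxAntiAffinity spreads app=busybox pods across distinct hostnames.
// With two ready nodes, a third replica cannot schedule, matching the
// "0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules" events.
func busyboxAntiAffinity() *corev1.Affinity {
    return &corev1.Affinity{
        PodAntiAffinity: &corev1.PodAntiAffinity{
            RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                LabelSelector: &metav1.LabelSelector{
                    MatchExpressions: []metav1.LabelSelectorRequirement{{
                        Key:      "app",
                        Operator: metav1.LabelSelectorOpIn,
                        Values:   []string{"busybox"},
                    }},
                },
                TopologyKey: "kubernetes.io/hostname",
            }},
        },
    }
}

func main() {
    fmt.Printf("%+v\n", busyboxAntiAffinity())
}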