=== RUN TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-230145
multinode_test.go:450: (dbg) Run: out/minikube-linux-amd64 start -p multinode-230145-m02 --driver=kvm2
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-230145-m02 --driver=kvm2 : exit status 14 (84.770093ms)
-- stdout --
* [multinode-230145-m02] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15232
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15232-3852/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3852/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr **
! Profile name 'multinode-230145-m02' is duplicated with machine name 'multinode-230145-m02' in profile 'multinode-230145'
X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run: out/minikube-linux-amd64 start -p multinode-230145-m03 --driver=kvm2
E1101 23:31:11.156240 10644 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/functional-225022/client.crt: no such file or directory
E1101 23:31:30.596929 10644 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/ingress-addon-legacy-225410/client.crt: no such file or directory
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-230145-m03 --driver=kvm2 : signal: killed (42.341419687s)
-- stdout --
* [multinode-230145-m03] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15232
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15232-3852/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3852/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the kvm2 driver based on user configuration
* Starting control plane node multinode-230145-m03 in cluster multinode-230145-m03
* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
-- /stdout --
multinode_test.go:460: failed to start profile. args "out/minikube-linux-amd64 start -p multinode-230145-m03 --driver=kvm2 " : signal: killed
multinode_test.go:465: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-230145
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-230145: context deadline exceeded (730ns)
multinode_test.go:470: (dbg) Run: out/minikube-linux-amd64 delete -p multinode-230145-m03
multinode_test.go:470: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p multinode-230145-m03: context deadline exceeded (319ns)
multinode_test.go:472: failed to clean temporary profile. args "out/minikube-linux-amd64 delete -p multinode-230145-m03" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-230145 -n multinode-230145
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-230145 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-230145 logs -n 25: (1.258950294s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| cp | multinode-230145 cp multinode-230145-m02:/home/docker/cp-test.txt | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | multinode-230145-m03:/home/docker/cp-test_multinode-230145-m02_multinode-230145-m03.txt | | | | | |
| ssh | multinode-230145 ssh -n | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | multinode-230145-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-230145 ssh -n multinode-230145-m03 sudo cat | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | /home/docker/cp-test_multinode-230145-m02_multinode-230145-m03.txt | | | | | |
| cp | multinode-230145 cp testdata/cp-test.txt | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | multinode-230145-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-230145 ssh -n | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | multinode-230145-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-230145 cp multinode-230145-m03:/home/docker/cp-test.txt | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | /tmp/TestMultiNodeserialCopyFile3888472495/001/cp-test_multinode-230145-m03.txt | | | | | |
| ssh | multinode-230145 ssh -n | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | multinode-230145-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-230145 cp multinode-230145-m03:/home/docker/cp-test.txt | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | multinode-230145:/home/docker/cp-test_multinode-230145-m03_multinode-230145.txt | | | | | |
| ssh | multinode-230145 ssh -n | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | multinode-230145-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-230145 ssh -n multinode-230145 sudo cat | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | /home/docker/cp-test_multinode-230145-m03_multinode-230145.txt | | | | | |
| cp | multinode-230145 cp multinode-230145-m03:/home/docker/cp-test.txt | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | multinode-230145-m02:/home/docker/cp-test_multinode-230145-m03_multinode-230145-m02.txt | | | | | |
| ssh | multinode-230145 ssh -n | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | multinode-230145-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-230145 ssh -n multinode-230145-m02 sudo cat | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | /home/docker/cp-test_multinode-230145-m03_multinode-230145-m02.txt | | | | | |
| node | multinode-230145 node stop m03 | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| node | multinode-230145 node start | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:06 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-230145 | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:06 UTC | |
| stop | -p multinode-230145 | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:06 UTC | 01 Nov 22 23:06 UTC |
| start | -p multinode-230145 | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:06 UTC | 01 Nov 22 23:21 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-230145 | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:21 UTC | |
| node | multinode-230145 node delete | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:21 UTC | 01 Nov 22 23:21 UTC |
| | m03 | | | | | |
| stop | multinode-230145 stop | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:21 UTC | 01 Nov 22 23:21 UTC |
| start | -p multinode-230145 | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:21 UTC | 01 Nov 22 23:31 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| node | list -p multinode-230145 | multinode-230145 | jenkins | v1.27.1 | 01 Nov 22 23:31 UTC | |
| start | -p multinode-230145-m02 | multinode-230145-m02 | jenkins | v1.27.1 | 01 Nov 22 23:31 UTC | |
| | --driver=kvm2 | | | | | |
| start | -p multinode-230145-m03 | multinode-230145-m03 | jenkins | v1.27.1 | 01 Nov 22 23:31 UTC | |
| | --driver=kvm2 | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/11/01 23:31:03
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1101 23:31:03.047096 23196 out.go:296] Setting OutFile to fd 1 ...
I1101 23:31:03.047205 23196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 23:31:03.047208 23196 out.go:309] Setting ErrFile to fd 2...
I1101 23:31:03.047211 23196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 23:31:03.047357 23196 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3852/.minikube/bin
I1101 23:31:03.047910 23196 out.go:303] Setting JSON to false
I1101 23:31:03.048790 23196 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4418,"bootTime":1667341045,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1101 23:31:03.048836 23196 start.go:126] virtualization: kvm guest
I1101 23:31:03.050866 23196 out.go:177] * [multinode-230145-m03] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1101 23:31:03.052215 23196 out.go:177] - MINIKUBE_LOCATION=15232
I1101 23:31:03.052162 23196 notify.go:220] Checking for updates...
I1101 23:31:03.054585 23196 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 23:31:03.055816 23196 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15232-3852/kubeconfig
I1101 23:31:03.057038 23196 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3852/.minikube
I1101 23:31:03.058338 23196 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1101 23:31:03.060024 23196 config.go:180] Loaded profile config "multinode-230145": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1101 23:31:03.060079 23196 driver.go:365] Setting default libvirt URI to qemu:///system
I1101 23:31:03.095096 23196 out.go:177] * Using the kvm2 driver based on user configuration
I1101 23:31:03.096292 23196 start.go:282] selected driver: kvm2
I1101 23:31:03.096304 23196 start.go:808] validating driver "kvm2" against <nil>
I1101 23:31:03.096317 23196 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 23:31:03.096641 23196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 23:31:03.096896 23196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15232-3852/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1101 23:31:03.111110 23196 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.27.1
I1101 23:31:03.111150 23196 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I1101 23:31:03.111652 23196 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
I1101 23:31:03.111758 23196 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
I1101 23:31:03.111769 23196 cni.go:95] Creating CNI manager for ""
I1101 23:31:03.111774 23196 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1101 23:31:03.111779 23196 start_flags.go:317] config:
{Name:multinode-230145-m03 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-230145-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:31:03.111860 23196 iso.go:124] acquiring lock: {Name:mk93232507d11fe0845a763ac9a9cca8a262da71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 23:31:03.113489 23196 out.go:177] * Starting control plane node multinode-230145-m03 in cluster multinode-230145-m03
I1101 23:31:03.114622 23196 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1101 23:31:03.114639 23196 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15232-3852/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I1101 23:31:03.114643 23196 cache.go:57] Caching tarball of preloaded images
I1101 23:31:03.114740 23196 preload.go:174] Found /home/jenkins/minikube-integration/15232-3852/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1101 23:31:03.114755 23196 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I1101 23:31:03.114867 23196 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/config.json ...
I1101 23:31:03.114879 23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/config.json: {Name:mk394098fc1305562f63a75d8759af0a3e890434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:31:03.115017 23196 cache.go:208] Successfully downloaded all kic artifacts
I1101 23:31:03.115030 23196 start.go:364] acquiring machines lock for multinode-230145-m03: {Name:mkf5a21f1745a6632babaddd1bb1f6424ebdc590 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1101 23:31:03.115069 23196 start.go:368] acquired machines lock for "multinode-230145-m03" in 32.42µs
I1101 23:31:03.115077 23196 start.go:93] Provisioning new machine with config: &{Name:multinode-230145-m03 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15232/minikube-v1.27.0-1666976405-15232-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-230145-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1101 23:31:03.115120 23196 start.go:125] createHost starting for "" (driver="kvm2")
I1101 23:31:03.117451 23196 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I1101 23:31:03.117567 23196 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1101 23:31:03.117603 23196 main.go:134] libmachine: Launching plugin server for driver kvm2
I1101 23:31:03.130606 23196 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:34465
I1101 23:31:03.130956 23196 main.go:134] libmachine: () Calling .GetVersion
I1101 23:31:03.131508 23196 main.go:134] libmachine: Using API Version 1
I1101 23:31:03.131526 23196 main.go:134] libmachine: () Calling .SetConfigRaw
I1101 23:31:03.131831 23196 main.go:134] libmachine: () Calling .GetMachineName
I1101 23:31:03.132018 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetMachineName
I1101 23:31:03.132126 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .DriverName
I1101 23:31:03.132242 23196 start.go:159] libmachine.API.Create for "multinode-230145-m03" (driver="kvm2")
I1101 23:31:03.132259 23196 client.go:168] LocalClient.Create starting
I1101 23:31:03.132278 23196 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-3852/.minikube/certs/ca.pem
I1101 23:31:03.132297 23196 main.go:134] libmachine: Decoding PEM data...
I1101 23:31:03.132307 23196 main.go:134] libmachine: Parsing certificate...
I1101 23:31:03.132347 23196 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-3852/.minikube/certs/cert.pem
I1101 23:31:03.132358 23196 main.go:134] libmachine: Decoding PEM data...
I1101 23:31:03.132365 23196 main.go:134] libmachine: Parsing certificate...
I1101 23:31:03.132378 23196 main.go:134] libmachine: Running pre-create checks...
I1101 23:31:03.132383 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .PreCreateCheck
I1101 23:31:03.132648 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetConfigRaw
I1101 23:31:03.132966 23196 main.go:134] libmachine: Creating machine...
I1101 23:31:03.132973 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .Create
I1101 23:31:03.133070 23196 main.go:134] libmachine: (multinode-230145-m03) Creating KVM machine...
I1101 23:31:03.134185 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found existing default KVM network
I1101 23:31:03.135224 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:03.135088 23219 network.go:246] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d4:91:91}}
I1101 23:31:03.136136 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:03.136059 23219 network.go:295] reserving subnet 192.168.50.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.50.0:0xc00012a830] misses:0}
I1101 23:31:03.136163 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:03.136103 23219 network.go:241] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1101 23:31:03.140788 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | trying to create private KVM network mk-multinode-230145-m03 192.168.50.0/24...
I1101 23:31:03.208578 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | private KVM network mk-multinode-230145-m03 192.168.50.0/24 created
I1101 23:31:03.208597 23196 main.go:134] libmachine: (multinode-230145-m03) Setting up store path in /home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03 ...
I1101 23:31:03.208616 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:03.208555 23219 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/15232-3852/.minikube
I1101 23:31:03.208637 23196 main.go:134] libmachine: (multinode-230145-m03) Building disk image from file:///home/jenkins/minikube-integration/15232-3852/.minikube/cache/iso/amd64/minikube-v1.27.0-1666976405-15232-amd64.iso
I1101 23:31:03.208711 23196 main.go:134] libmachine: (multinode-230145-m03) Downloading /home/jenkins/minikube-integration/15232-3852/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/15232-3852/.minikube/cache/iso/amd64/minikube-v1.27.0-1666976405-15232-amd64.iso...
I1101 23:31:03.409591 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:03.409478 23219 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03/id_rsa...
I1101 23:31:03.566101 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:03.565988 23219 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03/multinode-230145-m03.rawdisk...
I1101 23:31:03.566123 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Writing magic tar header
I1101 23:31:03.566134 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Writing SSH key tar header
I1101 23:31:03.566156 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:03.566110 23219 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03 ...
I1101 23:31:03.566234 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03
I1101 23:31:03.566261 23196 main.go:134] libmachine: (multinode-230145-m03) Setting executable bit set on /home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03 (perms=drwx------)
I1101 23:31:03.566268 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15232-3852/.minikube/machines
I1101 23:31:03.566276 23196 main.go:134] libmachine: (multinode-230145-m03) Setting executable bit set on /home/jenkins/minikube-integration/15232-3852/.minikube/machines (perms=drwxrwxr-x)
I1101 23:31:03.566287 23196 main.go:134] libmachine: (multinode-230145-m03) Setting executable bit set on /home/jenkins/minikube-integration/15232-3852/.minikube (perms=drwxr-xr-x)
I1101 23:31:03.566293 23196 main.go:134] libmachine: (multinode-230145-m03) Setting executable bit set on /home/jenkins/minikube-integration/15232-3852 (perms=drwxrwxr-x)
I1101 23:31:03.566299 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15232-3852/.minikube
I1101 23:31:03.566308 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15232-3852
I1101 23:31:03.566314 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I1101 23:31:03.566320 23196 main.go:134] libmachine: (multinode-230145-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1101 23:31:03.566328 23196 main.go:134] libmachine: (multinode-230145-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1101 23:31:03.566332 23196 main.go:134] libmachine: (multinode-230145-m03) Creating domain...
I1101 23:31:03.566339 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Checking permissions on dir: /home/jenkins
I1101 23:31:03.566344 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Checking permissions on dir: /home
I1101 23:31:03.566350 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Skipping /home - not owner
I1101 23:31:03.567480 23196 main.go:134] libmachine: (multinode-230145-m03) define libvirt domain using xml:
I1101 23:31:03.567506 23196 main.go:134] libmachine: (multinode-230145-m03) <domain type='kvm'>
I1101 23:31:03.567518 23196 main.go:134] libmachine: (multinode-230145-m03) <name>multinode-230145-m03</name>
I1101 23:31:03.567526 23196 main.go:134] libmachine: (multinode-230145-m03) <memory unit='MiB'>6000</memory>
I1101 23:31:03.567535 23196 main.go:134] libmachine: (multinode-230145-m03) <vcpu>2</vcpu>
I1101 23:31:03.567542 23196 main.go:134] libmachine: (multinode-230145-m03) <features>
I1101 23:31:03.567550 23196 main.go:134] libmachine: (multinode-230145-m03) <acpi/>
I1101 23:31:03.567558 23196 main.go:134] libmachine: (multinode-230145-m03) <apic/>
I1101 23:31:03.567566 23196 main.go:134] libmachine: (multinode-230145-m03) <pae/>
I1101 23:31:03.567573 23196 main.go:134] libmachine: (multinode-230145-m03)
I1101 23:31:03.567578 23196 main.go:134] libmachine: (multinode-230145-m03) </features>
I1101 23:31:03.567585 23196 main.go:134] libmachine: (multinode-230145-m03) <cpu mode='host-passthrough'>
I1101 23:31:03.567616 23196 main.go:134] libmachine: (multinode-230145-m03)
I1101 23:31:03.567635 23196 main.go:134] libmachine: (multinode-230145-m03) </cpu>
I1101 23:31:03.567645 23196 main.go:134] libmachine: (multinode-230145-m03) <os>
I1101 23:31:03.567653 23196 main.go:134] libmachine: (multinode-230145-m03) <type>hvm</type>
I1101 23:31:03.567662 23196 main.go:134] libmachine: (multinode-230145-m03) <boot dev='cdrom'/>
I1101 23:31:03.567669 23196 main.go:134] libmachine: (multinode-230145-m03) <boot dev='hd'/>
I1101 23:31:03.567678 23196 main.go:134] libmachine: (multinode-230145-m03) <bootmenu enable='no'/>
I1101 23:31:03.567684 23196 main.go:134] libmachine: (multinode-230145-m03) </os>
I1101 23:31:03.567692 23196 main.go:134] libmachine: (multinode-230145-m03) <devices>
I1101 23:31:03.567706 23196 main.go:134] libmachine: (multinode-230145-m03) <disk type='file' device='cdrom'>
I1101 23:31:03.567720 23196 main.go:134] libmachine: (multinode-230145-m03) <source file='/home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03/boot2docker.iso'/>
I1101 23:31:03.567732 23196 main.go:134] libmachine: (multinode-230145-m03) <target dev='hdc' bus='scsi'/>
I1101 23:31:03.567741 23196 main.go:134] libmachine: (multinode-230145-m03) <readonly/>
I1101 23:31:03.567748 23196 main.go:134] libmachine: (multinode-230145-m03) </disk>
I1101 23:31:03.567757 23196 main.go:134] libmachine: (multinode-230145-m03) <disk type='file' device='disk'>
I1101 23:31:03.567765 23196 main.go:134] libmachine: (multinode-230145-m03) <driver name='qemu' type='raw' cache='default' io='threads' />
I1101 23:31:03.567784 23196 main.go:134] libmachine: (multinode-230145-m03) <source file='/home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03/multinode-230145-m03.rawdisk'/>
I1101 23:31:03.567796 23196 main.go:134] libmachine: (multinode-230145-m03) <target dev='hda' bus='virtio'/>
I1101 23:31:03.567805 23196 main.go:134] libmachine: (multinode-230145-m03) </disk>
I1101 23:31:03.567811 23196 main.go:134] libmachine: (multinode-230145-m03) <interface type='network'>
I1101 23:31:03.567817 23196 main.go:134] libmachine: (multinode-230145-m03) <source network='mk-multinode-230145-m03'/>
I1101 23:31:03.567822 23196 main.go:134] libmachine: (multinode-230145-m03) <model type='virtio'/>
I1101 23:31:03.567827 23196 main.go:134] libmachine: (multinode-230145-m03) </interface>
I1101 23:31:03.567831 23196 main.go:134] libmachine: (multinode-230145-m03) <interface type='network'>
I1101 23:31:03.567838 23196 main.go:134] libmachine: (multinode-230145-m03) <source network='default'/>
I1101 23:31:03.567842 23196 main.go:134] libmachine: (multinode-230145-m03) <model type='virtio'/>
I1101 23:31:03.567847 23196 main.go:134] libmachine: (multinode-230145-m03) </interface>
I1101 23:31:03.567860 23196 main.go:134] libmachine: (multinode-230145-m03) <serial type='pty'>
I1101 23:31:03.567865 23196 main.go:134] libmachine: (multinode-230145-m03) <target port='0'/>
I1101 23:31:03.567869 23196 main.go:134] libmachine: (multinode-230145-m03) </serial>
I1101 23:31:03.567874 23196 main.go:134] libmachine: (multinode-230145-m03) <console type='pty'>
I1101 23:31:03.567879 23196 main.go:134] libmachine: (multinode-230145-m03) <target type='serial' port='0'/>
I1101 23:31:03.567884 23196 main.go:134] libmachine: (multinode-230145-m03) </console>
I1101 23:31:03.567888 23196 main.go:134] libmachine: (multinode-230145-m03) <rng model='virtio'>
I1101 23:31:03.567894 23196 main.go:134] libmachine: (multinode-230145-m03) <backend model='random'>/dev/random</backend>
I1101 23:31:03.567903 23196 main.go:134] libmachine: (multinode-230145-m03) </rng>
I1101 23:31:03.567911 23196 main.go:134] libmachine: (multinode-230145-m03)
I1101 23:31:03.567920 23196 main.go:134] libmachine: (multinode-230145-m03)
I1101 23:31:03.567925 23196 main.go:134] libmachine: (multinode-230145-m03) </devices>
I1101 23:31:03.567929 23196 main.go:134] libmachine: (multinode-230145-m03) </domain>
I1101 23:31:03.567936 23196 main.go:134] libmachine: (multinode-230145-m03)
I1101 23:31:03.572089 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:b9:03:25 in network default
I1101 23:31:03.572618 23196 main.go:134] libmachine: (multinode-230145-m03) Ensuring networks are active...
I1101 23:31:03.572632 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:03.573220 23196 main.go:134] libmachine: (multinode-230145-m03) Ensuring network default is active
I1101 23:31:03.573533 23196 main.go:134] libmachine: (multinode-230145-m03) Ensuring network mk-multinode-230145-m03 is active
I1101 23:31:03.574112 23196 main.go:134] libmachine: (multinode-230145-m03) Getting domain xml...
I1101 23:31:03.574783 23196 main.go:134] libmachine: (multinode-230145-m03) Creating domain...
I1101 23:31:04.780361 23196 main.go:134] libmachine: (multinode-230145-m03) Waiting to get IP...
I1101 23:31:04.781067 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:04.781441 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:04.781466 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:04.781425 23219 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
I1101 23:31:05.045730 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:05.046191 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:05.046210 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:05.046147 23219 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
I1101 23:31:05.429494 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:05.429908 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:05.429935 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:05.429855 23219 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
I1101 23:31:05.854405 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:05.854878 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:05.854909 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:05.854831 23219 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
I1101 23:31:06.329308 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:06.329681 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:06.329702 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:06.329632 23219 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
I1101 23:31:06.918256 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:06.918705 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:06.918732 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:06.918643 23219 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
I1101 23:31:07.754539 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:07.754945 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:07.754973 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:07.754883 23219 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
I1101 23:31:08.503299 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:08.503781 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:08.503793 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:08.503728 23219 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
I1101 23:31:09.492374 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:09.492762 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:09.492792 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:09.492729 23219 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
I1101 23:31:10.683962 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:10.684412 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:10.684438 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:10.684344 23219 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
I1101 23:31:12.363256 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:12.363709 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:12.363725 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:12.363675 23219 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
I1101 23:31:14.711036 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:14.711521 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:14.711571 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:14.711499 23219 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
I1101 23:31:18.080308 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:18.080752 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:18.080777 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:18.080700 23219 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
I1101 23:31:21.200511 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:21.200879 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find current IP address of domain multinode-230145-m03 in network mk-multinode-230145-m03
I1101 23:31:21.200903 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | I1101 23:31:21.200814 23219 retry.go:31] will retry after 4.276119362s: waiting for machine to come up
I1101 23:31:25.479592 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:25.479993 23196 main.go:134] libmachine: (multinode-230145-m03) Found IP for machine: 192.168.50.221
I1101 23:31:25.480017 23196 main.go:134] libmachine: (multinode-230145-m03) Reserving static IP address...
I1101 23:31:25.480033 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has current primary IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:25.480359 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | unable to find host DHCP lease matching {name: "multinode-230145-m03", mac: "52:54:00:a1:64:f0", ip: "192.168.50.221"} in network mk-multinode-230145-m03
I1101 23:31:25.548853 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Getting to WaitForSSH function...
I1101 23:31:25.548875 23196 main.go:134] libmachine: (multinode-230145-m03) Reserved static IP address: 192.168.50.221
I1101 23:31:25.548890 23196 main.go:134] libmachine: (multinode-230145-m03) Waiting for SSH to be available...
I1101 23:31:25.551514 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:25.551975 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:25.552004 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:25.552153 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Using SSH client type: external
I1101 23:31:25.552176 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03/id_rsa (-rw-------)
I1101 23:31:25.552208 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I1101 23:31:25.552216 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | About to run SSH command:
I1101 23:31:25.552223 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | exit 0
I1101 23:31:25.646917 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | SSH cmd err, output: <nil>:
I1101 23:31:25.647190 23196 main.go:134] libmachine: (multinode-230145-m03) KVM machine creation complete!
I1101 23:31:25.647466 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetConfigRaw
I1101 23:31:25.647979 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .DriverName
I1101 23:31:25.648118 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .DriverName
I1101 23:31:25.648274 23196 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes...
I1101 23:31:25.648286 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetState
I1101 23:31:25.649557 23196 main.go:134] libmachine: Detecting operating system of created instance...
I1101 23:31:25.649564 23196 main.go:134] libmachine: Waiting for SSH to be available...
I1101 23:31:25.649569 23196 main.go:134] libmachine: Getting to WaitForSSH function...
I1101 23:31:25.649574 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:25.651919 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:25.652272 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:25.652298 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:25.652417 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:25.652595 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:25.652737 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:25.652844 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:25.652994 23196 main.go:134] libmachine: Using SSH client type: native
I1101 23:31:25.653138 23196 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I1101 23:31:25.653143 23196 main.go:134] libmachine: About to run SSH command:
exit 0
I1101 23:31:25.778057 23196 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1101 23:31:25.778072 23196 main.go:134] libmachine: Detecting the provisioner...
I1101 23:31:25.778077 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:25.780326 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:25.780607 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:25.780632 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:25.780754 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:25.780932 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:25.781086 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:25.781212 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:25.781325 23196 main.go:134] libmachine: Using SSH client type: native
I1101 23:31:25.781440 23196 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I1101 23:31:25.781448 23196 main.go:134] libmachine: About to run SSH command:
cat /etc/os-release
I1101 23:31:25.907397 23196 main.go:134] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-gb347f1c-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I1101 23:31:25.907484 23196 main.go:134] libmachine: found compatible host: buildroot
I1101 23:31:25.907493 23196 main.go:134] libmachine: Provisioning with buildroot...
I1101 23:31:25.907502 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetMachineName
I1101 23:31:25.907686 23196 buildroot.go:166] provisioning hostname "multinode-230145-m03"
I1101 23:31:25.907701 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetMachineName
I1101 23:31:25.907871 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:25.909874 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:25.910111 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:25.910135 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:25.910247 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:25.910406 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:25.910558 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:25.910715 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:25.910868 23196 main.go:134] libmachine: Using SSH client type: native
I1101 23:31:25.910987 23196 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I1101 23:31:25.910995 23196 main.go:134] libmachine: About to run SSH command:
sudo hostname multinode-230145-m03 && echo "multinode-230145-m03" | sudo tee /etc/hostname
I1101 23:31:26.046871 23196 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-230145-m03
I1101 23:31:26.046883 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:26.049411 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.049709 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:26.049734 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.049887 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:26.050057 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:26.050187 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:26.050335 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:26.050491 23196 main.go:134] libmachine: Using SSH client type: native
I1101 23:31:26.050606 23196 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I1101 23:31:26.050618 23196 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-230145-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-230145-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-230145-m03' | sudo tee -a /etc/hosts;
fi
fi
I1101 23:31:26.186845 23196 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1101 23:31:26.186865 23196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3852/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3852/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3852/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3852/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3852/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3852/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3852/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3852/.minikube}
I1101 23:31:26.186906 23196 buildroot.go:174] setting up certificates
I1101 23:31:26.186914 23196 provision.go:83] configureAuth start
I1101 23:31:26.186927 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetMachineName
I1101 23:31:26.187217 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetIP
I1101 23:31:26.190067 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.190432 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:26.190462 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.190611 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:26.192910 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.193360 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:26.193385 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.193491 23196 provision.go:138] copyHostCerts
I1101 23:31:26.193536 23196 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3852/.minikube/cert.pem, removing ...
I1101 23:31:26.193553 23196 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3852/.minikube/cert.pem
I1101 23:31:26.193631 23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3852/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3852/.minikube/cert.pem (1123 bytes)
I1101 23:31:26.193725 23196 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3852/.minikube/key.pem, removing ...
I1101 23:31:26.193729 23196 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3852/.minikube/key.pem
I1101 23:31:26.193760 23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3852/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3852/.minikube/key.pem (1675 bytes)
I1101 23:31:26.193800 23196 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3852/.minikube/ca.pem, removing ...
I1101 23:31:26.193803 23196 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3852/.minikube/ca.pem
I1101 23:31:26.193823 23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3852/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3852/.minikube/ca.pem (1078 bytes)
I1101 23:31:26.193873 23196 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3852/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3852/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3852/.minikube/certs/ca-key.pem org=jenkins.multinode-230145-m03 san=[192.168.50.221 192.168.50.221 localhost 127.0.0.1 minikube multinode-230145-m03]
I1101 23:31:26.258026 23196 provision.go:172] copyRemoteCerts
I1101 23:31:26.258053 23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 23:31:26.258073 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:26.260150 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.260407 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:26.260419 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.260565 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:26.260683 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:26.260789 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:26.260938 23196 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03/id_rsa Username:docker}
I1101 23:31:26.352705 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1101 23:31:26.373777 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I1101 23:31:26.394101 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1101 23:31:26.414794 23196 provision.go:86] duration metric: configureAuth took 227.873121ms
I1101 23:31:26.414805 23196 buildroot.go:189] setting minikube options for container-runtime
I1101 23:31:26.414967 23196 config.go:180] Loaded profile config "multinode-230145-m03": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1101 23:31:26.414980 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .DriverName
I1101 23:31:26.415204 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:26.417642 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.418011 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:26.418038 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.418183 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:26.418332 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:26.418468 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:26.418613 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:26.418767 23196 main.go:134] libmachine: Using SSH client type: native
I1101 23:31:26.418919 23196 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I1101 23:31:26.418929 23196 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1101 23:31:26.548792 23196 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I1101 23:31:26.548803 23196 buildroot.go:70] root file system type: tmpfs
I1101 23:31:26.548982 23196 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1101 23:31:26.548997 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:26.551292 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.551653 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:26.551670 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.551884 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:26.552028 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:26.552178 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:26.552340 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:26.552483 23196 main.go:134] libmachine: Using SSH client type: native
I1101 23:31:26.552591 23196 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I1101 23:31:26.552642 23196 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1101 23:31:26.691051 23196 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1101 23:31:26.691064 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:26.693418 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.693704 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:26.693729 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:26.693865 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:26.694041 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:26.694201 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:26.694335 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:26.694483 23196 main.go:134] libmachine: Using SSH client type: native
I1101 23:31:26.694587 23196 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I1101 23:31:26.694598 23196 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1101 23:31:27.424134 23196 main.go:134] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
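The command above uses an idempotent install pattern for the rendered unit: the new file is written to docker.service.new, and it is only moved into place (followed by daemon-reload, enable, and restart) when it differs from the installed unit; on this freshly created VM the diff fails with "No such file or directory", so the unit is installed for the first time. A minimal Go sketch of how such a one-liner can be composed (illustrative only; buildSwapIfChanged is a hypothetical helper, the path and service name are taken from the log):

package main

import "fmt"

// buildSwapIfChanged returns a shell one-liner that replaces unit and restarts
// svc only when the newly rendered unit differs from the installed one.
func buildSwapIfChanged(unit, svc string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		unit, svc)
}

func main() {
	// Prints the same command string that the log shows being run over SSH.
	fmt.Println(buildSwapIfChanged("/lib/systemd/system/docker.service", "docker"))
}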
I1101 23:31:27.424154 23196 main.go:134] libmachine: Checking connection to Docker...
I1101 23:31:27.424165 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetURL
I1101 23:31:27.425486 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | Using libvirt version 6000000
I1101 23:31:27.428103 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.428526 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:27.428554 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.428721 23196 main.go:134] libmachine: Docker is up and running!
I1101 23:31:27.428728 23196 main.go:134] libmachine: Reticulating splines...
I1101 23:31:27.428736 23196 client.go:171] LocalClient.Create took 24.296469409s
I1101 23:31:27.428752 23196 start.go:167] duration metric: libmachine.API.Create for "multinode-230145-m03" took 24.296510599s
I1101 23:31:27.428758 23196 start.go:300] post-start starting for "multinode-230145-m03" (driver="kvm2")
I1101 23:31:27.428762 23196 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 23:31:27.428773 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .DriverName
I1101 23:31:27.429003 23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 23:31:27.429024 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:27.431286 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.431700 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:27.431717 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.431876 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:27.432031 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:27.432168 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:27.432268 23196 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03/id_rsa Username:docker}
I1101 23:31:27.527822 23196 ssh_runner.go:195] Run: cat /etc/os-release
I1101 23:31:27.531691 23196 info.go:137] Remote host: Buildroot 2021.02.12
I1101 23:31:27.531709 23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3852/.minikube/addons for local assets ...
I1101 23:31:27.531748 23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3852/.minikube/files for local assets ...
I1101 23:31:27.531811 23196 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3852/.minikube/files/etc/ssl/certs/106442.pem -> 106442.pem in /etc/ssl/certs
I1101 23:31:27.531878 23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 23:31:27.539416 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/files/etc/ssl/certs/106442.pem --> /etc/ssl/certs/106442.pem (1708 bytes)
I1101 23:31:27.561088 23196 start.go:303] post-start completed in 132.324809ms
I1101 23:31:27.561113 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetConfigRaw
I1101 23:31:27.561567 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetIP
I1101 23:31:27.563813 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.564111 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:27.564144 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.564400 23196 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/config.json ...
I1101 23:31:27.564597 23196 start.go:128] duration metric: createHost completed in 24.44947052s
I1101 23:31:27.564612 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:27.566816 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.567124 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:27.567145 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.567256 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:27.567450 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:27.567603 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:27.567738 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:27.567845 23196 main.go:134] libmachine: Using SSH client type: native
I1101 23:31:27.567947 23196 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.50.221 22 <nil> <nil>}
I1101 23:31:27.567954 23196 main.go:134] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I1101 23:31:27.691321 23196 main.go:134] libmachine: SSH cmd err, output: <nil>: 1667345487.661340446
I1101 23:31:27.691331 23196 fix.go:207] guest clock: 1667345487.661340446
I1101 23:31:27.691337 23196 fix.go:220] Guest: 2022-11-01 23:31:27.661340446 +0000 UTC Remote: 2022-11-01 23:31:27.564602837 +0000 UTC m=+24.576013056 (delta=96.737609ms)
I1101 23:31:27.691369 23196 fix.go:191] guest clock delta is within tolerance: 96.737609ms
I1101 23:31:27.691380 23196 start.go:83] releasing machines lock for "multinode-230145-m03", held for 24.576303591s
I1101 23:31:27.691416 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .DriverName
I1101 23:31:27.691614 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetIP
I1101 23:31:27.693801 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.694103 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:27.694128 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.694254 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .DriverName
I1101 23:31:27.694709 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .DriverName
I1101 23:31:27.694858 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .DriverName
I1101 23:31:27.694939 23196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1101 23:31:27.694962 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:27.695054 23196 ssh_runner.go:195] Run: systemctl --version
I1101 23:31:27.695072 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHHostname
I1101 23:31:27.697375 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.697694 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:27.697704 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.697714 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.697833 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:27.697988 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:27.698109 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:27.698124 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:27.698126 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:27.698277 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHPort
I1101 23:31:27.698274 23196 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03/id_rsa Username:docker}
I1101 23:31:27.698430 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHKeyPath
I1101 23:31:27.698574 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetSSHUsername
I1101 23:31:27.698720 23196 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3852/.minikube/machines/multinode-230145-m03/id_rsa Username:docker}
I1101 23:31:27.785009 23196 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1101 23:31:27.785090 23196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 23:31:27.811039 23196 docker.go:613] Got preloaded images:
I1101 23:31:27.811049 23196 docker.go:619] registry.k8s.io/kube-apiserver:v1.25.3 wasn't preloaded
I1101 23:31:27.811084 23196 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1101 23:31:27.819887 23196 ssh_runner.go:195] Run: which lz4
I1101 23:31:27.823305 23196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I1101 23:31:27.827220 23196 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1101 23:31:27.827237 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (404166592 bytes)
I1101 23:31:29.466012 23196 docker.go:577] Took 1.642720 seconds to copy over tarball
I1101 23:31:29.466065 23196 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1101 23:31:32.035469 23196 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.56936255s)
I1101 23:31:32.035488 23196 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1101 23:31:32.082604 23196 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1101 23:31:32.093343 23196 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
I1101 23:31:32.110064 23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:31:32.210108 23196 ssh_runner.go:195] Run: sudo systemctl restart docker
I1101 23:31:35.628957 23196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.418824325s)
I1101 23:31:35.629034 23196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1101 23:31:35.647380 23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 23:31:35.660051 23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 23:31:35.670794 23196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1101 23:31:35.702550 23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 23:31:35.715281 23196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 23:31:35.732239 23196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1101 23:31:35.834523 23196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1101 23:31:35.936201 23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:31:36.038614 23196 ssh_runner.go:195] Run: sudo systemctl restart docker
I1101 23:31:37.390775 23196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.352133225s)
I1101 23:31:37.390823 23196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1101 23:31:37.492212 23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:31:37.591032 23196 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I1101 23:31:37.605747 23196 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1101 23:31:37.605781 23196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1101 23:31:37.611723 23196 start.go:472] Will wait 60s for crictl version
I1101 23:31:37.611757 23196 ssh_runner.go:195] Run: sudo crictl version
I1101 23:31:37.743431 23196 start.go:481] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.20
RuntimeApiVersion: 1.41.0
I1101 23:31:37.743493 23196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1101 23:31:37.774214 23196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1101 23:31:37.803038 23196 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
I1101 23:31:37.803083 23196 main.go:134] libmachine: (multinode-230145-m03) Calling .GetIP
I1101 23:31:37.806094 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:37.806432 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:64:f0", ip: ""} in network mk-multinode-230145-m03: {Iface:virbr2 ExpiryTime:2022-11-02 00:31:18 +0000 UTC Type:0 Mac:52:54:00:a1:64:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:multinode-230145-m03 Clientid:01:52:54:00:a1:64:f0}
I1101 23:31:37.806456 23196 main.go:134] libmachine: (multinode-230145-m03) DBG | domain multinode-230145-m03 has defined IP address 192.168.50.221 and MAC address 52:54:00:a1:64:f0 in network mk-multinode-230145-m03
I1101 23:31:37.806677 23196 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I1101 23:31:37.810488 23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 23:31:37.822460 23196 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1101 23:31:37.822516 23196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 23:31:37.844671 23196 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1101 23:31:37.844685 23196 docker.go:543] Images already preloaded, skipping extraction
I1101 23:31:37.844738 23196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 23:31:37.865180 23196 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1101 23:31:37.865191 23196 cache_images.go:84] Images are preloaded, skipping loading
I1101 23:31:37.865233 23196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1101 23:31:37.895040 23196 cni.go:95] Creating CNI manager for ""
I1101 23:31:37.895062 23196 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1101 23:31:37.895070 23196 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1101 23:31:37.895086 23196 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-230145-m03 NodeName:multinode-230145-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1101 23:31:37.895227 23196 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.50.221
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-230145-m03"
kubeletExtraArgs:
node-ip: 192.168.50.221
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1101 23:31:37.895312 23196 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-230145-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:multinode-230145-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1101 23:31:37.895361 23196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I1101 23:31:37.904695 23196 binaries.go:44] Found k8s binaries, skipping transfer
I1101 23:31:37.904739 23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 23:31:37.912605 23196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (484 bytes)
I1101 23:31:37.927857 23196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 23:31:37.944074 23196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
I1101 23:31:37.959264 23196 ssh_runner.go:195] Run: grep 192.168.50.221 control-plane.minikube.internal$ /etc/hosts
I1101 23:31:37.963018 23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 23:31:37.974463 23196 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03 for IP: 192.168.50.221
I1101 23:31:37.974569 23196 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3852/.minikube/ca.key
I1101 23:31:37.974620 23196 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3852/.minikube/proxy-client-ca.key
I1101 23:31:37.974667 23196 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/client.key
I1101 23:31:37.974677 23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/client.crt with IP's: []
I1101 23:31:38.018707 23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/client.crt ...
I1101 23:31:38.018715 23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/client.crt: {Name:mkdc5bddfb7d5a3bd99444ba939b3471c8b85de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:31:38.018900 23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/client.key ...
I1101 23:31:38.018906 23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/client.key: {Name:mk0504a338878f645b4b41ebda4b08fc96c46238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:31:38.019001 23196 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.key.edab0817
I1101 23:31:38.019009 23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.crt.edab0817 with IP's: [192.168.50.221 10.96.0.1 127.0.0.1 10.0.0.1]
I1101 23:31:38.097191 23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.crt.edab0817 ...
I1101 23:31:38.097200 23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.crt.edab0817: {Name:mkdd8140d0e346e4ee72c1617c238a8e5a63f380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:31:38.097365 23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.key.edab0817 ...
I1101 23:31:38.097371 23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.key.edab0817: {Name:mk0e547c49ccbcf3a8057498069eb6914093060f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:31:38.097477 23196 certs.go:320] copying /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.crt.edab0817 -> /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.crt
I1101 23:31:38.097533 23196 certs.go:324] copying /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.key.edab0817 -> /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.key
I1101 23:31:38.097576 23196 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/proxy-client.key
I1101 23:31:38.097584 23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/proxy-client.crt with IP's: []
I1101 23:31:38.300915 23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/proxy-client.crt ...
I1101 23:31:38.300927 23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/proxy-client.crt: {Name:mkbf7c4f2cbee0c2357d40cd9d4dc809cb30766f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:31:38.301108 23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/proxy-client.key ...
I1101 23:31:38.301116 23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/proxy-client.key: {Name:mk302b4f57fe994c2a0755a5105b41a3a8eff48f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:31:38.301280 23196 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3852/.minikube/certs/home/jenkins/minikube-integration/15232-3852/.minikube/certs/10644.pem (1338 bytes)
W1101 23:31:38.301310 23196 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3852/.minikube/certs/home/jenkins/minikube-integration/15232-3852/.minikube/certs/10644_empty.pem, impossibly tiny 0 bytes
I1101 23:31:38.301316 23196 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3852/.minikube/certs/home/jenkins/minikube-integration/15232-3852/.minikube/certs/ca-key.pem (1679 bytes)
I1101 23:31:38.301335 23196 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3852/.minikube/certs/home/jenkins/minikube-integration/15232-3852/.minikube/certs/ca.pem (1078 bytes)
I1101 23:31:38.301354 23196 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3852/.minikube/certs/home/jenkins/minikube-integration/15232-3852/.minikube/certs/cert.pem (1123 bytes)
I1101 23:31:38.301372 23196 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3852/.minikube/certs/home/jenkins/minikube-integration/15232-3852/.minikube/certs/key.pem (1675 bytes)
I1101 23:31:38.301404 23196 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3852/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3852/.minikube/files/etc/ssl/certs/106442.pem (1708 bytes)
I1101 23:31:38.301940 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1101 23:31:38.325303 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1101 23:31:38.348160 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 23:31:38.370851 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/profiles/multinode-230145-m03/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1101 23:31:38.393581 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 23:31:38.415139 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1101 23:31:38.437263 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 23:31:38.458607 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1101 23:31:38.480722 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/certs/10644.pem --> /usr/share/ca-certificates/10644.pem (1338 bytes)
I1101 23:31:38.502421 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/files/etc/ssl/certs/106442.pem --> /usr/share/ca-certificates/106442.pem (1708 bytes)
I1101 23:31:38.524522 23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3852/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 23:31:38.545582 23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1101 23:31:38.560691 23196 ssh_runner.go:195] Run: openssl version
I1101 23:31:38.566144 23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10644.pem && ln -fs /usr/share/ca-certificates/10644.pem /etc/ssl/certs/10644.pem"
I1101 23:31:38.575030 23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10644.pem
I1101 23:31:38.579244 23196 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 1 22:50 /usr/share/ca-certificates/10644.pem
I1101 23:31:38.579273 23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10644.pem
I1101 23:31:38.584435 23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10644.pem /etc/ssl/certs/51391683.0"
I1101 23:31:38.593635 23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106442.pem && ln -fs /usr/share/ca-certificates/106442.pem /etc/ssl/certs/106442.pem"
I1101 23:31:38.602994 23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106442.pem
I1101 23:31:38.607204 23196 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 1 22:50 /usr/share/ca-certificates/106442.pem
I1101 23:31:38.607230 23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106442.pem
I1101 23:31:38.612247 23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/106442.pem /etc/ssl/certs/3ec20f2e.0"
I1101 23:31:38.621659 23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 23:31:38.630687 23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 23:31:38.635522 23196 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 1 22:45 /usr/share/ca-certificates/minikubeCA.pem
I1101 23:31:38.635555 23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 23:31:38.640786 23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1101 23:31:38.649557 23196 kubeadm.go:396] StartCluster: {Name:multinode-230145-m03 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15232/minikube-v1.27.0-1666976405-15232-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-230145-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:31:38.649656 23196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1101 23:31:38.672574 23196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 23:31:38.680372 23196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 23:31:38.688055 23196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 23:31:38.695806 23196 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 23:31:38.695832 23196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I1101 23:31:38.738621 23196 kubeadm.go:317] W1101 23:31:38.722123 1275 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I1101 23:31:38.872172 23196 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
*
* ==> Docker <==
* -- Journal begins at Tue 2022-11-01 23:21:30 UTC, ends at Tue 2022-11-01 23:31:46 UTC. --
Nov 01 23:21:50 multinode-230145 dockerd[886]: time="2022-11-01T23:21:50.126366830Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c27213ce582dd633ae556f891a2a9e5c9b749359c7a6a725996a94b427edf636 pid=1724 runtime=io.containerd.runc.v2
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.279872412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.279951574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.279963176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.280159013Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/bdfe551eb3a6eb304f982dd09838a9d2284ade6ec9c063643800502c7ab1c41c pid=2065 runtime=io.containerd.runc.v2
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.591384600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.591631873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.591657050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.591978714Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c6b3855c10f56f43a831cdf3ca2524e4f4757f79bfb8996baf8b95b56a7476a9 pid=2101 runtime=io.containerd.runc.v2
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.791630342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.791708368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.791720109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.792148503Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/18a780479003c9e2c277db912bdc2d5fa43db0d54a98020ff0b13141c056862f pid=2147 runtime=io.containerd.runc.v2
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.887941967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.888246867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.888261256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 23:21:58 multinode-230145 dockerd[886]: time="2022-11-01T23:21:58.888576189Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/924b76c6c0da527cf55c21f8e454325df963ad43bd2c94a568d8ffcc99c4f3af pid=2186 runtime=io.containerd.runc.v2
Nov 01 23:21:59 multinode-230145 dockerd[886]: time="2022-11-01T23:21:59.593961711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 23:21:59 multinode-230145 dockerd[886]: time="2022-11-01T23:21:59.594419085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 23:21:59 multinode-230145 dockerd[886]: time="2022-11-01T23:21:59.594574442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 23:21:59 multinode-230145 dockerd[886]: time="2022-11-01T23:21:59.594883370Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/bb4a76c8496d4e2dd15bc4ea9d3a2a81a4a6396d02592f63d144e0142db9eb08 pid=2344 runtime=io.containerd.runc.v2
Nov 01 23:22:01 multinode-230145 dockerd[886]: time="2022-11-01T23:22:01.852413637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 23:22:01 multinode-230145 dockerd[886]: time="2022-11-01T23:22:01.852496042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 23:22:01 multinode-230145 dockerd[886]: time="2022-11-01T23:22:01.852535802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 23:22:01 multinode-230145 dockerd[886]: time="2022-11-01T23:22:01.853027830Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/88977904adbacc4761723a1d24c84b2b333f06e4319e3443c754ce4a07cf0b77 pid=2399 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
88977904adbac d6e3e26021b60 9 minutes ago Running kindnet-cni 2 bdfe551eb3a6e
bb4a76c8496d4 6e38f40d628db 9 minutes ago Running storage-provisioner 3 924b76c6c0da5
18a780479003c beaaf00edd38a 9 minutes ago Running kube-proxy 2 c6b3855c10f56
c27213ce582dd a8a176a5d5d69 9 minutes ago Running etcd 2 10e2a1270d3d5
f2267f3af3209 6d23ec0e8b87e 9 minutes ago Running kube-scheduler 2 c6a94d760bcde
0bfadecc8e445 6039992312758 9 minutes ago Running kube-controller-manager 2 9f95452386167
6ab1e824fe380 0346dbd74bcb9 9 minutes ago Running kube-apiserver 2 1ac2f284bee9b
60aa48a1379bd 6e38f40d628db 23 minutes ago Exited storage-provisioner 2 f4435c49ca4df
89d5479cc94f7 d6e3e26021b60 24 minutes ago Exited kindnet-cni 1 715a9a69c0a5a
5c4ab848a437f beaaf00edd38a 24 minutes ago Exited kube-proxy 1 ce3a7f6506c9e
9b38a0defca0d 6039992312758 24 minutes ago Exited kube-controller-manager 1 9bfe29fd72e47
598c7f6bc9aa3 a8a176a5d5d69 24 minutes ago Exited etcd 1 b4c55a217563d
b4f48c898e43b 0346dbd74bcb9 24 minutes ago Exited kube-apiserver 1 9d84d0e142796
e48288e1f6fe7 6d23ec0e8b87e 24 minutes ago Exited kube-scheduler 1 bda4214a12eb3
c88c637cf3dd7 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 27 minutes ago Exited busybox 0 4173c8da0a2ed
bbe084e6b4829 5185b96f0becf 28 minutes ago Exited coredns 0 d9b09c2c3dde9
*
* ==> coredns [bbe084e6b482] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 9a34f9264402cb585a9f45fa2022f72259f38c0069ff0551404dff6d373c3318d40dccb7d57503b326f0f19faa2110be407c171bae22df1ef9dd2930a017b6e6
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> describe nodes <==
* Name: multinode-230145
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-230145
kubernetes.io/os=linux
minikube.k8s.io/commit=65bfd3dc2bf9824cf305579b01895f56b2ba9210
minikube.k8s.io/name=multinode-230145
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_11_01T23_02_38_0700
minikube.k8s.io/version=v1.27.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 01 Nov 2022 23:02:34 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-230145
AcquireTime: <unset>
RenewTime: Tue, 01 Nov 2022 23:31:36 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 01 Nov 2022 23:27:20 +0000 Tue, 01 Nov 2022 23:02:31 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 01 Nov 2022 23:27:20 +0000 Tue, 01 Nov 2022 23:02:31 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 01 Nov 2022 23:27:20 +0000 Tue, 01 Nov 2022 23:02:31 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 01 Nov 2022 23:27:20 +0000 Tue, 01 Nov 2022 23:22:14 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.139
Hostname: multinode-230145
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: c680078fce814fa98afd490b29acfe24
System UUID: c680078f-ce81-4fa9-8afd-490b29acfe24
Boot ID: 89a223a5-3e2a-4cea-aecf-62fcf1423d0a
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-65db55d5d6-w5f9t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
kube-system coredns-565d847f94-ws9nv 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 28m
kube-system etcd-multinode-230145 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 29m
kube-system kindnet-t56sf 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 28m
kube-system kube-apiserver-multinode-230145 250m (12%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system kube-controller-manager-multinode-230145 200m (10%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system kube-proxy-wdb5v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m
kube-system kube-scheduler-multinode-230145 100m (5%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 28m kube-proxy
Normal Starting 9m47s kube-proxy
Normal Starting 24m kube-proxy
Normal Starting 29m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 29m (x4 over 29m) kubelet Node multinode-230145 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 29m (x4 over 29m) kubelet Node multinode-230145 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 29m (x4 over 29m) kubelet Node multinode-230145 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 29m kubelet Updated Node Allocatable limit across pods
Normal Starting 29m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 29m kubelet Node multinode-230145 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 29m kubelet Node multinode-230145 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 29m kubelet Node multinode-230145 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 29m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 28m node-controller Node multinode-230145 event: Registered Node multinode-230145 in Controller
Normal NodeReady 28m kubelet Node multinode-230145 status is now: NodeReady
Normal NodeAllocatableEnforced 24m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 24m (x8 over 24m) kubelet Node multinode-230145 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 24m (x8 over 24m) kubelet Node multinode-230145 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24m (x7 over 24m) kubelet Node multinode-230145 status is now: NodeHasSufficientPID
Normal Starting 24m kubelet Starting kubelet.
Normal RegisteredNode 24m node-controller Node multinode-230145 event: Registered Node multinode-230145 in Controller
Normal Starting 9m59s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 9m59s (x8 over 9m59s) kubelet Node multinode-230145 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m59s (x8 over 9m59s) kubelet Node multinode-230145 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m59s (x7 over 9m59s) kubelet Node multinode-230145 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 9m59s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 9m40s node-controller Node multinode-230145 event: Registered Node multinode-230145 in Controller
Name: multinode-230145-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-230145-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 01 Nov 2022 23:26:50 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-230145-m02
AcquireTime: <unset>
RenewTime: Tue, 01 Nov 2022 23:31:36 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 01 Nov 2022 23:27:00 +0000 Tue, 01 Nov 2022 23:26:50 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 01 Nov 2022 23:27:00 +0000 Tue, 01 Nov 2022 23:26:50 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 01 Nov 2022 23:27:00 +0000 Tue, 01 Nov 2022 23:26:50 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 01 Nov 2022 23:27:00 +0000 Tue, 01 Nov 2022 23:27:00 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.8
Hostname: multinode-230145-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 97586882e0f24d30be33c52589ef9840
System UUID: 97586882-e0f2-4d30-be33-c52589ef9840
Boot ID: ab41a1e0-5bed-41cc-9aab-e791bf00e58a
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-65db55d5d6-rks76 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
kube-system kindnet-5wmdc 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 27m
kube-system kube-proxy-7qp72 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 19m kube-proxy
Normal Starting 27m kube-proxy
Normal Starting 4m53s kube-proxy
Normal NodeHasNoDiskPressure 27m (x8 over 27m) kubelet Node multinode-230145-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 27m (x8 over 27m) kubelet Node multinode-230145-m02 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientMemory 19m (x2 over 19m) kubelet Node multinode-230145-m02 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 19m (x2 over 19m) kubelet Node multinode-230145-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 19m kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 19m (x2 over 19m) kubelet Node multinode-230145-m02 status is now: NodeHasNoDiskPressure
Normal Starting 19m kubelet Starting kubelet.
Normal NodeReady 19m kubelet Node multinode-230145-m02 status is now: NodeReady
Normal Starting 4m56s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m56s (x2 over 4m56s) kubelet Node multinode-230145-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m56s (x2 over 4m56s) kubelet Node multinode-230145-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m56s (x2 over 4m56s) kubelet Node multinode-230145-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m56s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 4m46s kubelet Node multinode-230145-m02 status is now: NodeReady
*
* ==> dmesg <==
* [Nov 1 23:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.071605] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.823308] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.266690] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.147303] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.595547] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +5.503449] systemd-fstab-generator[511]: Ignoring "noauto" for root device
[ +0.101274] systemd-fstab-generator[522]: Ignoring "noauto" for root device
[ +1.105324] systemd-fstab-generator[794]: Ignoring "noauto" for root device
[ +0.296032] systemd-fstab-generator[847]: Ignoring "noauto" for root device
[ +0.100667] systemd-fstab-generator[858]: Ignoring "noauto" for root device
[ +0.103275] systemd-fstab-generator[869]: Ignoring "noauto" for root device
[ +1.623244] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
[ +0.096149] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
[ +5.016452] systemd-fstab-generator[1258]: Ignoring "noauto" for root device
[ +0.363789] kauditd_printk_skb: 67 callbacks suppressed
[ +11.902494] kauditd_printk_skb: 8 callbacks suppressed
*
* ==> etcd [598c7f6bc9aa] <==
* {"level":"info","ts":"2022-11-01T23:07:16.153Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4af51893258ecb17","local-member-id":"3cbdd43a8949db2d","cluster-version":"3.5"}
{"level":"info","ts":"2022-11-01T23:07:16.153Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-11-01T23:07:16.163Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-11-01T23:07:16.166Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3cbdd43a8949db2d","initial-advertise-peer-urls":["https://192.168.39.139:2380"],"listen-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-11-01T23:07:16.166Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-11-01T23:07:16.167Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.139:2380"}
{"level":"info","ts":"2022-11-01T23:07:16.167Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.139:2380"}
{"level":"info","ts":"2022-11-01T23:07:17.215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d is starting a new election at term 2"}
{"level":"info","ts":"2022-11-01T23:07:17.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became pre-candidate at term 2"}
{"level":"info","ts":"2022-11-01T23:07:17.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d received MsgPreVoteResp from 3cbdd43a8949db2d at term 2"}
{"level":"info","ts":"2022-11-01T23:07:17.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became candidate at term 3"}
{"level":"info","ts":"2022-11-01T23:07:17.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d received MsgVoteResp from 3cbdd43a8949db2d at term 3"}
{"level":"info","ts":"2022-11-01T23:07:17.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became leader at term 3"}
{"level":"info","ts":"2022-11-01T23:07:17.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3cbdd43a8949db2d elected leader 3cbdd43a8949db2d at term 3"}
{"level":"info","ts":"2022-11-01T23:07:17.217Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"3cbdd43a8949db2d","local-member-attributes":"{Name:multinode-230145 ClientURLs:[https://192.168.39.139:2379]}","request-path":"/0/members/3cbdd43a8949db2d/attributes","cluster-id":"4af51893258ecb17","publish-timeout":"7s"}
{"level":"info","ts":"2022-11-01T23:07:17.218Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-11-01T23:07:17.219Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.139:2379"}
{"level":"info","ts":"2022-11-01T23:07:17.220Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-11-01T23:07:17.221Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-11-01T23:07:17.221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-11-01T23:07:17.221Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-11-01T23:17:17.248Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1131}
{"level":"info","ts":"2022-11-01T23:17:17.271Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1131,"took":"21.039986ms"}
{"level":"info","ts":"2022-11-01T23:21:13.473Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-11-01T23:21:13.474Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-230145","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"]}
*
* ==> etcd [c27213ce582d] <==
* {"level":"info","ts":"2022-11-01T23:21:51.130Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"3cbdd43a8949db2d","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-11-01T23:21:51.137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d switched to configuration voters=(4376887760750500653)"}
{"level":"info","ts":"2022-11-01T23:21:51.138Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4af51893258ecb17","local-member-id":"3cbdd43a8949db2d","added-peer-id":"3cbdd43a8949db2d","added-peer-peer-urls":["https://192.168.39.139:2380"]}
{"level":"info","ts":"2022-11-01T23:21:51.139Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-11-01T23:21:51.140Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3cbdd43a8949db2d","initial-advertise-peer-urls":["https://192.168.39.139:2380"],"listen-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-11-01T23:21:51.142Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-11-01T23:21:51.143Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"3cbdd43a8949db2d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2022-11-01T23:21:51.143Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4af51893258ecb17","local-member-id":"3cbdd43a8949db2d","cluster-version":"3.5"}
{"level":"info","ts":"2022-11-01T23:21:51.144Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-11-01T23:21:51.145Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.139:2380"}
{"level":"info","ts":"2022-11-01T23:21:51.145Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.139:2380"}
{"level":"info","ts":"2022-11-01T23:21:51.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d is starting a new election at term 3"}
{"level":"info","ts":"2022-11-01T23:21:51.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became pre-candidate at term 3"}
{"level":"info","ts":"2022-11-01T23:21:51.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d received MsgPreVoteResp from 3cbdd43a8949db2d at term 3"}
{"level":"info","ts":"2022-11-01T23:21:51.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became candidate at term 4"}
{"level":"info","ts":"2022-11-01T23:21:51.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d received MsgVoteResp from 3cbdd43a8949db2d at term 4"}
{"level":"info","ts":"2022-11-01T23:21:51.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became leader at term 4"}
{"level":"info","ts":"2022-11-01T23:21:51.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3cbdd43a8949db2d elected leader 3cbdd43a8949db2d at term 4"}
{"level":"info","ts":"2022-11-01T23:21:51.614Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"3cbdd43a8949db2d","local-member-attributes":"{Name:multinode-230145 ClientURLs:[https://192.168.39.139:2379]}","request-path":"/0/members/3cbdd43a8949db2d/attributes","cluster-id":"4af51893258ecb17","publish-timeout":"7s"}
{"level":"info","ts":"2022-11-01T23:21:51.614Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-11-01T23:21:51.617Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-11-01T23:21:51.618Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-11-01T23:21:51.624Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.139:2379"}
{"level":"info","ts":"2022-11-01T23:21:51.626Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-11-01T23:21:51.626Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 23:31:46 up 10 min, 0 users, load average: 0.41, 0.21, 0.09
Linux multinode-230145 5.10.57 #1 SMP Fri Oct 28 21:02:11 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [6ab1e824fe38] <==
* I1101 23:21:54.330758 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1101 23:21:54.330773 1 crd_finalizer.go:266] Starting CRDFinalizer
I1101 23:21:54.327615 1 controller.go:85] Starting OpenAPI controller
I1101 23:21:54.296998 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1101 23:21:54.297145 1 controller.go:83] Starting OpenAPI AggregationController
I1101 23:21:54.297528 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1101 23:21:54.297544 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I1101 23:21:54.327632 1 controller.go:85] Starting OpenAPI V3 controller
I1101 23:21:54.402874 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1101 23:21:54.407076 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1101 23:21:54.430554 1 shared_informer.go:262] Caches are synced for crd-autoregister
I1101 23:21:54.431274 1 shared_informer.go:262] Caches are synced for node_authorizer
E1101 23:21:54.439410 1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1101 23:21:54.491866 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I1101 23:21:54.492050 1 cache.go:39] Caches are synced for autoregister controller
I1101 23:21:54.492424 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1101 23:21:54.493870 1 apf_controller.go:305] Running API Priority and Fairness config worker
I1101 23:21:55.024048 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1101 23:21:55.298124 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1101 23:21:56.543143 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I1101 23:21:56.660266 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I1101 23:21:56.683072 1 controller.go:616] quota admission added evaluator for: deployments.apps
I1101 23:21:56.734970 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1101 23:21:56.740621 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1101 23:22:17.243657 1 controller.go:616] quota admission added evaluator for: endpoints
*
* ==> kube-apiserver [b4f48c898e43] <==
* I1101 23:07:19.609528 1 naming_controller.go:291] Starting NamingConditionController
I1101 23:07:19.609796 1 establishing_controller.go:76] Starting EstablishingController
I1101 23:07:19.610005 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1101 23:07:19.610232 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1101 23:07:19.610241 1 crd_finalizer.go:266] Starting CRDFinalizer
I1101 23:07:19.610322 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1101 23:07:19.610332 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1101 23:07:19.608517 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I1101 23:07:19.737718 1 shared_informer.go:262] Caches are synced for node_authorizer
I1101 23:07:19.746283 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I1101 23:07:19.806233 1 apf_controller.go:305] Running API Priority and Fairness config worker
I1101 23:07:19.807267 1 cache.go:39] Caches are synced for autoregister controller
I1101 23:07:19.807936 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1101 23:07:19.808446 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1101 23:07:19.814722 1 shared_informer.go:262] Caches are synced for crd-autoregister
I1101 23:07:19.816141 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1101 23:07:20.381182 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1101 23:07:20.612280 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1101 23:07:22.373113 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I1101 23:07:22.552459 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I1101 23:07:22.562780 1 controller.go:616] quota admission added evaluator for: deployments.apps
I1101 23:07:22.637667 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1101 23:07:22.645682 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1101 23:07:32.716048 1 controller.go:616] quota admission added evaluator for: endpoints
I1101 23:07:32.914343 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [0bfadecc8e44] <==
* I1101 23:22:06.893917 1 shared_informer.go:262] Caches are synced for disruption
I1101 23:22:06.922293 1 shared_informer.go:262] Caches are synced for namespace
I1101 23:22:06.955769 1 shared_informer.go:262] Caches are synced for resource quota
I1101 23:22:06.958044 1 shared_informer.go:262] Caches are synced for service account
I1101 23:22:06.964843 1 shared_informer.go:262] Caches are synced for resource quota
I1101 23:22:07.045300 1 shared_informer.go:262] Caches are synced for cronjob
I1101 23:22:07.366427 1 shared_informer.go:262] Caches are synced for garbage collector
I1101 23:22:07.366539 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1101 23:22:07.393702 1 shared_informer.go:262] Caches are synced for garbage collector
W1101 23:22:14.730567 1 topologycache.go:199] Can't get CPU or zone information for multinode-230145-m02 node
I1101 23:22:16.851570 1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94-ws9nv" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-565d847f94-ws9nv"
I1101 23:22:16.852971 1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
I1101 23:22:16.853360 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-w5f9t" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-w5f9t"
I1101 23:22:46.848809 1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-fxmwh"
I1101 23:22:46.862507 1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-fxmwh"
I1101 23:22:46.864054 1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kindnet-fv4km"
I1101 23:22:46.864872 1 event.go:294] "Event occurred" object="multinode-230145-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-230145-m02 status is now: NodeNotReady"
I1101 23:22:46.876769 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-7qp72" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1101 23:22:46.888643 1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-fv4km"
I1101 23:22:46.895901 1 event.go:294] "Event occurred" object="kube-system/kindnet-5wmdc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1101 23:26:50.513571 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-rks76" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-rks76"
W1101 23:26:50.514026 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-230145-m02" does not exist
I1101 23:26:50.533276 1 range_allocator.go:367] Set node multinode-230145-m02 PodCIDR to [10.244.1.0/24]
W1101 23:27:00.688022 1 topologycache.go:199] Can't get CPU or zone information for multinode-230145-m02 node
I1101 23:27:01.939546 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-rks76" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-rks76"
*
* ==> kube-controller-manager [9b38a0defca0] <==
* I1101 23:07:33.238157 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1101 23:07:33.258726 1 shared_informer.go:262] Caches are synced for garbage collector
W1101 23:08:12.749946 1 topologycache.go:199] Can't get CPU or zone information for multinode-230145-m03 node
I1101 23:08:12.751803 1 event.go:294] "Event occurred" object="multinode-230145-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-230145-m02 status is now: NodeNotReady"
I1101 23:08:12.763942 1 event.go:294] "Event occurred" object="kube-system/kindnet-5wmdc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1101 23:08:12.770981 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-rks76" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1101 23:08:12.779409 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-7qp72" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1101 23:08:12.793206 1 event.go:294] "Event occurred" object="multinode-230145-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-230145-m03 status is now: NodeNotReady"
I1101 23:08:12.811154 1 event.go:294] "Event occurred" object="kube-system/kindnet-fv4km" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1101 23:08:12.822817 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-fxmwh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1101 23:12:05.948656 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-z9v5j"
I1101 23:12:09.948830 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-rks76" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-rks76"
W1101 23:12:09.948935 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-230145-m02" does not exist
I1101 23:12:09.956279 1 range_allocator.go:367] Set node multinode-230145-m02 PodCIDR to [10.244.1.0/24]
W1101 23:12:20.328919 1 topologycache.go:199] Can't get CPU or zone information for multinode-230145-m02 node
I1101 23:12:22.881180 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-rks76" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-rks76"
W1101 23:16:45.314768 1 topologycache.go:199] Can't get CPU or zone information for multinode-230145-m02 node
W1101 23:16:46.123141 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-230145-m03" does not exist
W1101 23:16:46.123181 1 topologycache.go:199] Can't get CPU or zone information for multinode-230145-m02 node
I1101 23:16:46.130776 1 range_allocator.go:367] Set node multinode-230145-m03 PodCIDR to [10.244.2.0/24]
W1101 23:17:06.503304 1 topologycache.go:199] Can't get CPU or zone information for multinode-230145-m02 node
I1101 23:17:07.941005 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-z9v5j" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-z9v5j"
I1101 23:21:08.995329 1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-mr7vs"
W1101 23:21:12.010633 1 topologycache.go:199] Can't get CPU or zone information for multinode-230145-m02 node
I1101 23:21:12.983473 1 event.go:294] "Event occurred" object="multinode-230145-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-230145-m03 event: Removing Node multinode-230145-m03 from Controller"
*
* ==> kube-proxy [18a780479003] <==
* I1101 23:21:59.088022 1 node.go:163] Successfully retrieved node IP: 192.168.39.139
I1101 23:21:59.088334 1 server_others.go:138] "Detected node IP" address="192.168.39.139"
I1101 23:21:59.088571 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1101 23:21:59.165313 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I1101 23:21:59.165424 1 server_others.go:206] "Using iptables Proxier"
I1101 23:21:59.166091 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1101 23:21:59.166958 1 server.go:661] "Version info" version="v1.25.3"
I1101 23:21:59.166970 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 23:21:59.168735 1 config.go:317] "Starting service config controller"
I1101 23:21:59.168798 1 shared_informer.go:255] Waiting for caches to sync for service config
I1101 23:21:59.168833 1 config.go:226] "Starting endpoint slice config controller"
I1101 23:21:59.168954 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1101 23:21:59.171058 1 config.go:444] "Starting node config controller"
I1101 23:21:59.171311 1 shared_informer.go:255] Waiting for caches to sync for node config
I1101 23:21:59.269371 1 shared_informer.go:262] Caches are synced for endpoint slice config
I1101 23:21:59.269408 1 shared_informer.go:262] Caches are synced for service config
I1101 23:21:59.271505 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-proxy [5c4ab848a437] <==
* I1101 23:07:22.857176 1 node.go:163] Successfully retrieved node IP: 192.168.39.139
I1101 23:07:22.857226 1 server_others.go:138] "Detected node IP" address="192.168.39.139"
I1101 23:07:22.857271 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1101 23:07:22.968096 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I1101 23:07:22.968135 1 server_others.go:206] "Using iptables Proxier"
I1101 23:07:22.969139 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1101 23:07:22.970110 1 server.go:661] "Version info" version="v1.25.3"
I1101 23:07:22.970146 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 23:07:22.976399 1 config.go:444] "Starting node config controller"
I1101 23:07:22.976721 1 shared_informer.go:255] Waiting for caches to sync for node config
I1101 23:07:22.976931 1 config.go:317] "Starting service config controller"
I1101 23:07:22.976938 1 shared_informer.go:255] Waiting for caches to sync for service config
I1101 23:07:22.976953 1 config.go:226] "Starting endpoint slice config controller"
I1101 23:07:22.976956 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1101 23:07:23.076979 1 shared_informer.go:262] Caches are synced for service config
I1101 23:07:23.076995 1 shared_informer.go:262] Caches are synced for node config
I1101 23:07:23.077263 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [e48288e1f6fe] <==
* I1101 23:07:17.100195 1 serving.go:348] Generated self-signed cert in-memory
W1101 23:07:19.715765 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1101 23:07:19.715823 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1101 23:07:19.715834 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1101 23:07:19.715843 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1101 23:07:19.757160 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1101 23:07:19.757211 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 23:07:19.761133 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1101 23:07:19.761278 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1101 23:07:19.762790 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1101 23:07:19.768240 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1101 23:07:19.871084 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [f2267f3af320] <==
* I1101 23:21:52.288625 1 serving.go:348] Generated self-signed cert in-memory
W1101 23:21:54.336049 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1101 23:21:54.336075 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1101 23:21:54.336086 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1101 23:21:54.336094 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1101 23:21:54.396408 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1101 23:21:54.396450 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 23:21:54.399327 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1101 23:21:54.399446 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1101 23:21:54.401415 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1101 23:21:54.402233 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1101 23:21:54.503067 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Tue 2022-11-01 23:21:30 UTC, ends at Tue 2022-11-01 23:31:46 UTC. --
Nov 01 23:31:02 multinode-230145 kubelet[1264]: E1101 23:31:02.461815 1264 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"090bce6a-20eb-4504-835e-b80728536333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-w5f9t_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-w5f9t" podUID=090bce6a-20eb-4504-835e-b80728536333
Nov 01 23:31:12 multinode-230145 kubelet[1264]: E1101 23:31:12.461949 1264 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-ws9nv_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="d9b09c2c3dde944d47972081d1632b2ecd85013632688b7d94c9013929b9bcde"
Nov 01 23:31:12 multinode-230145 kubelet[1264]: E1101 23:31:12.462388 1264 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:d9b09c2c3dde944d47972081d1632b2ecd85013632688b7d94c9013929b9bcde}
Nov 01 23:31:12 multinode-230145 kubelet[1264]: E1101 23:31:12.462479 1264 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a552bb3d-2d06-454c-addd-b1cff317827f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-ws9nv_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Nov 01 23:31:12 multinode-230145 kubelet[1264]: E1101 23:31:12.462542 1264 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a552bb3d-2d06-454c-addd-b1cff317827f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-ws9nv_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-ws9nv" podUID=a552bb3d-2d06-454c-addd-b1cff317827f
Nov 01 23:31:13 multinode-230145 kubelet[1264]: E1101 23:31:13.462257 1264 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-w5f9t_default\" network: could not retrieve port mappings: key is not found" podSandboxID="4173c8da0a2ed44a19b9f022773cf2a4818d786d80a6a44ee3e1e97baba75c61"
Nov 01 23:31:13 multinode-230145 kubelet[1264]: E1101 23:31:13.462600 1264 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:4173c8da0a2ed44a19b9f022773cf2a4818d786d80a6a44ee3e1e97baba75c61}
Nov 01 23:31:13 multinode-230145 kubelet[1264]: E1101 23:31:13.462736 1264 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"090bce6a-20eb-4504-835e-b80728536333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-w5f9t_default\\\" network: could not retrieve port mappings: key is not found\""
Nov 01 23:31:13 multinode-230145 kubelet[1264]: E1101 23:31:13.463589 1264 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"090bce6a-20eb-4504-835e-b80728536333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-w5f9t_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-w5f9t" podUID=090bce6a-20eb-4504-835e-b80728536333
Nov 01 23:31:23 multinode-230145 kubelet[1264]: E1101 23:31:23.460857 1264 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-ws9nv_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="d9b09c2c3dde944d47972081d1632b2ecd85013632688b7d94c9013929b9bcde"
Nov 01 23:31:23 multinode-230145 kubelet[1264]: E1101 23:31:23.461524 1264 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:d9b09c2c3dde944d47972081d1632b2ecd85013632688b7d94c9013929b9bcde}
Nov 01 23:31:23 multinode-230145 kubelet[1264]: E1101 23:31:23.461691 1264 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a552bb3d-2d06-454c-addd-b1cff317827f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-ws9nv_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Nov 01 23:31:23 multinode-230145 kubelet[1264]: E1101 23:31:23.461938 1264 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a552bb3d-2d06-454c-addd-b1cff317827f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-ws9nv_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-ws9nv" podUID=a552bb3d-2d06-454c-addd-b1cff317827f
Nov 01 23:31:25 multinode-230145 kubelet[1264]: E1101 23:31:25.461114 1264 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-w5f9t_default\" network: could not retrieve port mappings: key is not found" podSandboxID="4173c8da0a2ed44a19b9f022773cf2a4818d786d80a6a44ee3e1e97baba75c61"
Nov 01 23:31:25 multinode-230145 kubelet[1264]: E1101 23:31:25.461334 1264 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:4173c8da0a2ed44a19b9f022773cf2a4818d786d80a6a44ee3e1e97baba75c61}
Nov 01 23:31:25 multinode-230145 kubelet[1264]: E1101 23:31:25.461439 1264 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"090bce6a-20eb-4504-835e-b80728536333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-w5f9t_default\\\" network: could not retrieve port mappings: key is not found\""
Nov 01 23:31:25 multinode-230145 kubelet[1264]: E1101 23:31:25.461468 1264 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"090bce6a-20eb-4504-835e-b80728536333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-w5f9t_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-w5f9t" podUID=090bce6a-20eb-4504-835e-b80728536333
Nov 01 23:31:38 multinode-230145 kubelet[1264]: E1101 23:31:38.463417 1264 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-w5f9t_default\" network: could not retrieve port mappings: key is not found" podSandboxID="4173c8da0a2ed44a19b9f022773cf2a4818d786d80a6a44ee3e1e97baba75c61"
Nov 01 23:31:38 multinode-230145 kubelet[1264]: E1101 23:31:38.463419 1264 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-ws9nv_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="d9b09c2c3dde944d47972081d1632b2ecd85013632688b7d94c9013929b9bcde"
Nov 01 23:31:38 multinode-230145 kubelet[1264]: E1101 23:31:38.463651 1264 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:d9b09c2c3dde944d47972081d1632b2ecd85013632688b7d94c9013929b9bcde}
Nov 01 23:31:38 multinode-230145 kubelet[1264]: E1101 23:31:38.463692 1264 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a552bb3d-2d06-454c-addd-b1cff317827f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-ws9nv_kube-system\\\" network: could not retrieve port mappings: key is not found\""
Nov 01 23:31:38 multinode-230145 kubelet[1264]: E1101 23:31:38.463717 1264 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a552bb3d-2d06-454c-addd-b1cff317827f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-ws9nv_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-ws9nv" podUID=a552bb3d-2d06-454c-addd-b1cff317827f
Nov 01 23:31:38 multinode-230145 kubelet[1264]: E1101 23:31:38.463738 1264 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:4173c8da0a2ed44a19b9f022773cf2a4818d786d80a6a44ee3e1e97baba75c61}
Nov 01 23:31:38 multinode-230145 kubelet[1264]: E1101 23:31:38.463762 1264 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"090bce6a-20eb-4504-835e-b80728536333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-w5f9t_default\\\" network: could not retrieve port mappings: key is not found\""
Nov 01 23:31:38 multinode-230145 kubelet[1264]: E1101 23:31:38.463779 1264 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"090bce6a-20eb-4504-835e-b80728536333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-w5f9t_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-w5f9t" podUID=090bce6a-20eb-4504-835e-b80728536333
*
* ==> storage-provisioner [60aa48a1379b] <==
* I1101 23:08:06.658214 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1101 23:08:06.669738 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1101 23:08:06.670408 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1101 23:08:24.074510 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1101 23:08:24.075472 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"03b5e838-c307-45b3-a8df-b8f993b37ae4", APIVersion:"v1", ResourceVersion:"894", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-230145_9a22516b-8334-4db3-be1a-68a379579ce1 became leader
I1101 23:08:24.076346 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-230145_9a22516b-8334-4db3-be1a-68a379579ce1!
I1101 23:08:24.178525 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-230145_9a22516b-8334-4db3-be1a-68a379579ce1!
*
* ==> storage-provisioner [bb4a76c8496d] <==
* I1101 23:21:59.820808 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1101 23:21:59.847492 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1101 23:21:59.847544 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1101 23:22:17.245965 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1101 23:22:17.246272 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-230145_32877ac1-1a0b-4b57-b136-3189cc31a6bc!
I1101 23:22:17.248240 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"03b5e838-c307-45b3-a8df-b8f993b37ae4", APIVersion:"v1", ResourceVersion:"1826", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-230145_32877ac1-1a0b-4b57-b136-3189cc31a6bc became leader
I1101 23:22:17.348381 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-230145_32877ac1-1a0b-4b57-b136-3189cc31a6bc!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-230145 -n multinode-230145
helpers_test.go:261: (dbg) Run: kubectl --context multinode-230145 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-65db55d5d6-mr7vs
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context multinode-230145 describe pod busybox-65db55d5d6-mr7vs
helpers_test.go:280: (dbg) kubectl --context multinode-230145 describe pod busybox-65db55d5d6-mr7vs:
-- stdout --
Name:             busybox-65db55d5d6-mr7vs
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=busybox
                  pod-template-hash=65db55d5d6
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/busybox-65db55d5d6
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q4z87 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-q4z87:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  10m                    default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
  Warning  FailedScheduling  10m                    default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
  Warning  FailedScheduling  8m58s                  default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  4m47s (x3 over 9m53s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (44.56s)