=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-171421 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E1031 17:15:22.185807 10599 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3810/.minikube/profiles/functional-164306/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-171421 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (1m17.476771497s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-171421 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-171421 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.709112397s)
preload_test.go:67: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-171421 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.6
preload_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-171421 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.6: (43.194161108s)
preload_test.go:76: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-171421 -- sudo crictl image ls
preload_test.go:81: Expected to find gcr.io/k8s-minikube/busybox in output of `crictl image ls`, instead got:
-- stdout --
IMAGE                                     TAG                  IMAGE ID        SIZE
docker.io/kindest/kindnetd                v20220726-ed811e41   d921cee849482   25.8MB
gcr.io/k8s-minikube/storage-provisioner   v5                   6e38f40d628db   9.06MB
k8s.gcr.io/coredns/coredns                v1.8.6               a4ca41631cc7a   13.6MB
k8s.gcr.io/etcd                           3.5.3-0              aebe758cef4cd   102MB
k8s.gcr.io/kube-apiserver                 v1.24.6              860f263331c95   33.8MB
k8s.gcr.io/kube-controller-manager        v1.24.6              c6c20157a4233   31MB
k8s.gcr.io/kube-proxy                     v1.24.4              7a53d1e08ef58   39.5MB
k8s.gcr.io/kube-proxy                     v1.24.6              0bb39497ab33b   39.5MB
k8s.gcr.io/kube-scheduler                 v1.24.6              c786c777a4e1c   15.5MB
k8s.gcr.io/pause                          3.7                  221177c6082a8   311kB
-- /stdout --
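The listing above shows the failure directly: the gcr.io/k8s-minikube/busybox image pulled at preload_test.go:57 is gone after the second start, consistent with the freshly downloaded v1.24.6 preload tarball being extracted over the containerd image store later in this log. A minimal sketch of the failing check, assuming the test shells out to minikube and greps the `crictl image ls` output; the helper name is illustrative, not the real test's:

```go
package preload

import (
	"os/exec"
	"strings"
	"testing"
)

// verifyBusyboxSurvived re-lists the guest's images and fails the test when
// the image pulled before the upgrade restart has disappeared.
func verifyBusyboxSurvived(t *testing.T, profile string) {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
		"--", "sudo", "crictl", "image", "ls").CombinedOutput()
	if err != nil {
		t.Fatalf("crictl image ls: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		t.Fatalf("expected gcr.io/k8s-minikube/busybox in image list, got:\n%s", out)
	}
}
```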
panic.go:522: *** TestPreload FAILED at 2022-10-31 17:16:24.328807496 +0000 UTC m=+2434.148964394
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-171421 -n test-preload-171421
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-171421 logs -n 25
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| cp | multinode-165353 cp multinode-165353-m03:/home/docker/cp-test.txt | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 16:57 UTC | 31 Oct 22 16:57 UTC |
| | multinode-165353:/home/docker/cp-test_multinode-165353-m03_multinode-165353.txt | | | | | |
| ssh | multinode-165353 ssh -n | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 16:57 UTC | 31 Oct 22 16:57 UTC |
| | multinode-165353-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-165353 ssh -n multinode-165353 sudo cat | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 16:57 UTC | 31 Oct 22 16:57 UTC |
| | /home/docker/cp-test_multinode-165353-m03_multinode-165353.txt | | | | | |
| cp | multinode-165353 cp multinode-165353-m03:/home/docker/cp-test.txt | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 16:57 UTC | 31 Oct 22 16:57 UTC |
| | multinode-165353-m02:/home/docker/cp-test_multinode-165353-m03_multinode-165353-m02.txt | | | | | |
| ssh | multinode-165353 ssh -n | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 16:57 UTC | 31 Oct 22 16:57 UTC |
| | multinode-165353-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-165353 ssh -n multinode-165353-m02 sudo cat | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 16:57 UTC | 31 Oct 22 16:57 UTC |
| | /home/docker/cp-test_multinode-165353-m03_multinode-165353-m02.txt | | | | | |
| node | multinode-165353 node stop m03 | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 16:57 UTC | 31 Oct 22 16:57 UTC |
| node | multinode-165353 node start | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 16:57 UTC | 31 Oct 22 16:58 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-165353 | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 16:58 UTC | |
| stop | -p multinode-165353 | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 16:58 UTC | 31 Oct 22 17:01 UTC |
| start | -p multinode-165353 | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 17:01 UTC | 31 Oct 22 17:07 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-165353 | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 17:07 UTC | |
| node | multinode-165353 node delete | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 17:07 UTC | 31 Oct 22 17:07 UTC |
| | m03 | | | | | |
| stop | multinode-165353 stop | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 17:07 UTC | 31 Oct 22 17:10 UTC |
| start | -p multinode-165353 | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 17:10 UTC | 31 Oct 22 17:13 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-165353 | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 17:13 UTC | |
| start | -p multinode-165353-m02 | multinode-165353-m02 | jenkins | v1.27.1 | 31 Oct 22 17:13 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-165353-m03 | multinode-165353-m03 | jenkins | v1.27.1 | 31 Oct 22 17:13 UTC | 31 Oct 22 17:14 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-165353 | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 17:14 UTC | |
| delete | -p multinode-165353-m03 | multinode-165353-m03 | jenkins | v1.27.1 | 31 Oct 22 17:14 UTC | 31 Oct 22 17:14 UTC |
| delete | -p multinode-165353 | multinode-165353 | jenkins | v1.27.1 | 31 Oct 22 17:14 UTC | 31 Oct 22 17:14 UTC |
| start | -p test-preload-171421 | test-preload-171421 | jenkins | v1.27.1 | 31 Oct 22 17:14 UTC | 31 Oct 22 17:15 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-171421 | test-preload-171421 | jenkins | v1.27.1 | 31 Oct 22 17:15 UTC | 31 Oct 22 17:15 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| start | -p test-preload-171421 | test-preload-171421 | jenkins | v1.27.1 | 31 Oct 22 17:15 UTC | 31 Oct 22 17:16 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.6 | | | | | |
| ssh | -p test-preload-171421 -- sudo | test-preload-171421 | jenkins | v1.27.1 | 31 Oct 22 17:16 UTC | 31 Oct 22 17:16 UTC |
| | crictl image ls | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/10/31 17:15:40
Running on machine: ubuntu-20-agent-11
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1031 17:15:40.956043 23640 out.go:296] Setting OutFile to fd 1 ...
I1031 17:15:40.956163 23640 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:15:40.956179 23640 out.go:309] Setting ErrFile to fd 2...
I1031 17:15:40.956186 23640 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:15:40.956327 23640 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3810/.minikube/bin
I1031 17:15:40.956987 23640 out.go:303] Setting JSON to false
I1031 17:15:40.957885 23640 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3494,"bootTime":1667233047,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1031 17:15:40.957954 23640 start.go:126] virtualization: kvm guest
I1031 17:15:40.960283 23640 out.go:177] * [test-preload-171421] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1031 17:15:40.961854 23640 out.go:177] - MINIKUBE_LOCATION=15232
I1031 17:15:40.961774 23640 notify.go:220] Checking for updates...
I1031 17:15:40.964520 23640 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1031 17:15:40.966010 23640 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15232-3810/kubeconfig
I1031 17:15:40.967585 23640 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3810/.minikube
I1031 17:15:40.969042 23640 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1031 17:15:40.971077 23640 config.go:180] Loaded profile config "test-preload-171421": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I1031 17:15:40.971553 23640 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1031 17:15:40.971605 23640 main.go:134] libmachine: Launching plugin server for driver kvm2
I1031 17:15:40.985999 23640 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42101
I1031 17:15:40.986439 23640 main.go:134] libmachine: () Calling .GetVersion
I1031 17:15:40.987029 23640 main.go:134] libmachine: Using API Version 1
I1031 17:15:40.987053 23640 main.go:134] libmachine: () Calling .SetConfigRaw
I1031 17:15:40.987351 23640 main.go:134] libmachine: () Calling .GetMachineName
I1031 17:15:40.987550 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:15:40.989511 23640 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I1031 17:15:40.990923 23640 driver.go:365] Setting default libvirt URI to qemu:///system
I1031 17:15:40.991240 23640 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1031 17:15:40.991284 23640 main.go:134] libmachine: Launching plugin server for driver kvm2
I1031 17:15:41.005657 23640 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43617
I1031 17:15:41.006004 23640 main.go:134] libmachine: () Calling .GetVersion
I1031 17:15:41.006450 23640 main.go:134] libmachine: Using API Version 1
I1031 17:15:41.006475 23640 main.go:134] libmachine: () Calling .SetConfigRaw
I1031 17:15:41.006788 23640 main.go:134] libmachine: () Calling .GetMachineName
I1031 17:15:41.007002 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:15:41.039890 23640 out.go:177] * Using the kvm2 driver based on existing profile
I1031 17:15:41.041283 23640 start.go:282] selected driver: kvm2
I1031 17:15:41.041307 23640 start.go:808] validating driver "kvm2" against &{Name:test-preload-171421 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15232/minikube-v1.27.0-1666976405-15232-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-171421 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 17:15:41.041402 23640 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1031 17:15:41.042381 23640 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1031 17:15:41.042597 23640 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15232-3810/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1031 17:15:41.057106 23640 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.27.1
I1031 17:15:41.057491 23640 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1031 17:15:41.057528 23640 cni.go:95] Creating CNI manager for ""
I1031 17:15:41.057537 23640 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
I1031 17:15:41.057552 23640 start_flags.go:317] config:
{Name:test-preload-171421 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15232/minikube-v1.27.0-1666976405-15232-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-171421 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 17:15:41.057676 23640 iso.go:124] acquiring lock: {Name:mkfbe90c43010bc4f57d56b04a2f7ded914fe53a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1031 17:15:41.059659 23640 out.go:177] * Starting control plane node test-preload-171421 in cluster test-preload-171421
I1031 17:15:41.061204 23640 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1031 17:15:41.166009 23640 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1031 17:15:41.166040 23640 cache.go:57] Caching tarball of preloaded images
I1031 17:15:41.166184 23640 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1031 17:15:41.168018 23640 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I1031 17:15:41.169556 23640 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1031 17:15:41.282871 23640 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15232-3810/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1031 17:15:47.292423 23640 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1031 17:15:47.292506 23640 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-3810/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1031 17:15:48.158999 23640 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
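The download URL above carries the expected digest as a `checksum=md5:` query parameter, and the preload code saves and verifies that checksum before trusting the cached tarball. A stand-alone sketch of such a verification, not minikube's actual implementation; the digest is the one from the URL above:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected hex digest.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := verifyMD5("preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4",
		"0de094b674a9198bc47721c3b23603d5")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload checksum OK")
}
```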
I1031 17:15:48.159135 23640 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/config.json ...
I1031 17:15:48.159333 23640 cache.go:208] Successfully downloaded all kic artifacts
I1031 17:15:48.159365 23640 start.go:364] acquiring machines lock for test-preload-171421: {Name:mkfd5623925c49bca18831d101d6fb5ac0a67f16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1031 17:15:48.159456 23640 start.go:368] acquired machines lock for "test-preload-171421" in 72.54µs
I1031 17:15:48.159472 23640 start.go:96] Skipping create...Using existing machine configuration
I1031 17:15:48.159476 23640 fix.go:55] fixHost starting:
I1031 17:15:48.159771 23640 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1031 17:15:48.159807 23640 main.go:134] libmachine: Launching plugin server for driver kvm2
I1031 17:15:48.174327 23640 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36737
I1031 17:15:48.174759 23640 main.go:134] libmachine: () Calling .GetVersion
I1031 17:15:48.175235 23640 main.go:134] libmachine: Using API Version 1
I1031 17:15:48.175261 23640 main.go:134] libmachine: () Calling .SetConfigRaw
I1031 17:15:48.175581 23640 main.go:134] libmachine: () Calling .GetMachineName
I1031 17:15:48.175744 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:15:48.175900 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetState
I1031 17:15:48.177261 23640 fix.go:103] recreateIfNeeded on test-preload-171421: state=Running err=<nil>
W1031 17:15:48.177283 23640 fix.go:129] unexpected machine state, will restart: <nil>
I1031 17:15:48.180429 23640 out.go:177] * Updating the running kvm2 "test-preload-171421" VM ...
I1031 17:15:48.181723 23640 machine.go:88] provisioning docker machine ...
I1031 17:15:48.181741 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:15:48.181909 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetMachineName
I1031 17:15:48.182066 23640 buildroot.go:166] provisioning hostname "test-preload-171421"
I1031 17:15:48.182089 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetMachineName
I1031 17:15:48.182243 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHHostname
I1031 17:15:48.184459 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.184837 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:48.184866 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.185061 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHPort
I1031 17:15:48.185252 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:15:48.185414 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:15:48.185553 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHUsername
I1031 17:15:48.185708 23640 main.go:134] libmachine: Using SSH client type: native
I1031 17:15:48.185838 23640 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1031 17:15:48.185852 23640 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-171421 && echo "test-preload-171421" | sudo tee /etc/hostname
I1031 17:15:48.326479 23640 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-171421
I1031 17:15:48.326502 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHHostname
I1031 17:15:48.329111 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.329484 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:48.329512 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.329726 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHPort
I1031 17:15:48.329904 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:15:48.330058 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:15:48.330210 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHUsername
I1031 17:15:48.330390 23640 main.go:134] libmachine: Using SSH client type: native
I1031 17:15:48.330511 23640 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1031 17:15:48.330530 23640 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-171421' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-171421/g' /etc/hosts;
	else
		echo '127.0.1.1 test-preload-171421' | sudo tee -a /etc/hosts;
	fi
fi
I1031 17:15:48.459099 23640 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1031 17:15:48.459129 23640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3810/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3810/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3810/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3810/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3810/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3810/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3810/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3810/.minikube}
I1031 17:15:48.459154 23640 buildroot.go:174] setting up certificates
I1031 17:15:48.459170 23640 provision.go:83] configureAuth start
I1031 17:15:48.459183 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetMachineName
I1031 17:15:48.459380 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetIP
I1031 17:15:48.461996 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.462349 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:48.462390 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.462529 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHHostname
I1031 17:15:48.464584 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.464898 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:48.464934 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.465073 23640 provision.go:138] copyHostCerts
I1031 17:15:48.465117 23640 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3810/.minikube/ca.pem, removing ...
I1031 17:15:48.465130 23640 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3810/.minikube/ca.pem
I1031 17:15:48.465191 23640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3810/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3810/.minikube/ca.pem (1078 bytes)
I1031 17:15:48.465255 23640 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3810/.minikube/cert.pem, removing ...
I1031 17:15:48.465263 23640 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3810/.minikube/cert.pem
I1031 17:15:48.465289 23640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3810/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3810/.minikube/cert.pem (1123 bytes)
I1031 17:15:48.465334 23640 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3810/.minikube/key.pem, removing ...
I1031 17:15:48.465341 23640 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3810/.minikube/key.pem
I1031 17:15:48.465380 23640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3810/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3810/.minikube/key.pem (1679 bytes)
I1031 17:15:48.465419 23640 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3810/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3810/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3810/.minikube/certs/ca-key.pem org=jenkins.test-preload-171421 san=[192.168.39.249 192.168.39.249 localhost 127.0.0.1 minikube test-preload-171421]
I1031 17:15:48.647929 23640 provision.go:172] copyRemoteCerts
I1031 17:15:48.647968 23640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1031 17:15:48.647983 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHHostname
I1031 17:15:48.649997 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.650269 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:48.650301 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.650416 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHPort
I1031 17:15:48.650588 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:15:48.650742 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHUsername
I1031 17:15:48.650870 23640 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3810/.minikube/machines/test-preload-171421/id_rsa Username:docker}
I1031 17:15:48.744490 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1031 17:15:48.767380 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1031 17:15:48.788509 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1031 17:15:48.810401 23640 provision.go:86] duration metric: configureAuth took 351.220884ms
I1031 17:15:48.810424 23640 buildroot.go:189] setting minikube options for container-runtime
I1031 17:15:48.810602 23640 config.go:180] Loaded profile config "test-preload-171421": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I1031 17:15:48.810617 23640 machine.go:91] provisioned docker machine in 628.881867ms
I1031 17:15:48.810626 23640 start.go:300] post-start starting for "test-preload-171421" (driver="kvm2")
I1031 17:15:48.810634 23640 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1031 17:15:48.810665 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:15:48.810962 23640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1031 17:15:48.810993 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHHostname
I1031 17:15:48.813429 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.813778 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:48.813810 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.813898 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHPort
I1031 17:15:48.814077 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:15:48.814221 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHUsername
I1031 17:15:48.814352 23640 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3810/.minikube/machines/test-preload-171421/id_rsa Username:docker}
I1031 17:15:48.908607 23640 ssh_runner.go:195] Run: cat /etc/os-release
I1031 17:15:48.913669 23640 info.go:137] Remote host: Buildroot 2021.02.12
I1031 17:15:48.913696 23640 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3810/.minikube/addons for local assets ...
I1031 17:15:48.913767 23640 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3810/.minikube/files for local assets ...
I1031 17:15:48.913966 23640 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3810/.minikube/files/etc/ssl/certs/105992.pem -> 105992.pem in /etc/ssl/certs
I1031 17:15:48.914074 23640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1031 17:15:48.928398 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/files/etc/ssl/certs/105992.pem --> /etc/ssl/certs/105992.pem (1708 bytes)
I1031 17:15:48.952209 23640 start.go:303] post-start completed in 141.572161ms
I1031 17:15:48.952230 23640 fix.go:57] fixHost completed within 792.754081ms
I1031 17:15:48.952253 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHHostname
I1031 17:15:48.954822 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.955235 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:48.955257 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:48.955481 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHPort
I1031 17:15:48.955667 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:15:48.955860 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:15:48.956003 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHUsername
I1031 17:15:48.956172 23640 main.go:134] libmachine: Using SSH client type: native
I1031 17:15:48.956343 23640 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1031 17:15:48.956367 23640 main.go:134] libmachine: About to run SSH command:
date +%s.%N
I1031 17:15:49.086980 23640 main.go:134] libmachine: SSH cmd err, output: <nil>: 1667236549.081935579
I1031 17:15:49.086999 23640 fix.go:207] guest clock: 1667236549.081935579
I1031 17:15:49.087012 23640 fix.go:220] Guest: 2022-10-31 17:15:49.081935579 +0000 UTC Remote: 2022-10-31 17:15:48.952233717 +0000 UTC m=+8.056600723 (delta=129.701862ms)
I1031 17:15:49.087032 23640 fix.go:191] guest clock delta is within tolerance: 129.701862ms
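fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the delta exceeds a tolerance; here the 129.7ms skew is within bounds. A sketch of that comparison, assuming the guest output is exactly the seconds.nanoseconds string logged above (not minikube's code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output (e.g. "1667236549.081935579")
// and returns how far the guest clock is from the given host time.
func guestClockDelta(out string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	delta, err := guestClockDelta("1667236549.081935579", time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v\n", delta)
}
```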
I1031 17:15:49.087039 23640 start.go:83] releasing machines lock for "test-preload-171421", held for 927.570711ms
I1031 17:15:49.087082 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:15:49.087355 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetIP
I1031 17:15:49.089897 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:49.090276 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:49.090306 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:49.090466 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:15:49.090969 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:15:49.091147 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:15:49.091236 23640 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1031 17:15:49.091288 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHHostname
I1031 17:15:49.091408 23640 ssh_runner.go:195] Run: systemctl --version
I1031 17:15:49.091436 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHHostname
I1031 17:15:49.093874 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:49.094208 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:49.094250 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:49.094269 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:49.094360 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHPort
I1031 17:15:49.094567 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:15:49.094652 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:49.094678 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:49.094750 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHUsername
I1031 17:15:49.094893 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHPort
I1031 17:15:49.094913 23640 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3810/.minikube/machines/test-preload-171421/id_rsa Username:docker}
I1031 17:15:49.095066 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:15:49.095228 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHUsername
I1031 17:15:49.095353 23640 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3810/.minikube/machines/test-preload-171421/id_rsa Username:docker}
I1031 17:15:49.195056 23640 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1031 17:15:49.195241 23640 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 17:15:49.222940 23640 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6"; assuming images are not preloaded.
I1031 17:15:49.223016 23640 ssh_runner.go:195] Run: which lz4
I1031 17:15:49.228098 23640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1031 17:15:49.232094 23640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1031 17:15:49.232117 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I1031 17:15:51.321872 23640 containerd.go:496] Took 2.093793 seconds to copy over tarball
I1031 17:15:51.321935 23640 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1031 17:15:54.387621 23640 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.065660493s)
I1031 17:15:54.387666 23640 containerd.go:503] Took 3.065756 seconds to extract the tarball
I1031 17:15:54.387678 23640 ssh_runner.go:146] rm: /preloaded.tar.lz4
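The 458 MB tarball is copied to the guest and unpacked with tar's `-I lz4` decompress filter into /var, which is where containerd keeps its image store; this is what overwrites the previously pulled images. An illustrative local equivalent of that step (minikube actually drives it over SSH via ssh_runner):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as the logged command: decompress with lz4, extract into /var.
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}
```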
I1031 17:15:54.433154 23640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1031 17:15:54.554088 23640 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1031 17:15:54.591019 23640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1031 17:15:54.603997 23640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1031 17:15:54.616077 23640 docker.go:189] disabling docker service ...
I1031 17:15:54.616112 23640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1031 17:15:54.629633 23640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1031 17:15:54.641573 23640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1031 17:15:54.774630 23640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1031 17:15:54.947337 23640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1031 17:15:55.011902 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1031 17:15:55.039366 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I1031 17:15:55.048961 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I1031 17:15:55.058388 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I1031 17:15:55.067952 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
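Each of the four sed commands above rewrites one whole matching line of /etc/containerd/config.toml in place. A hypothetical Go rendition of the same rewrites, applied to an invented config string for clarity:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/containerd/config.toml; values here are invented.
	conf := `sandbox_image = "k8s.gcr.io/pause:3.6"
SystemdCgroup = true
`
	rules := []struct{ pattern, repl string }{
		{`(?m)^.*sandbox_image = .*$`, `sandbox_image = "k8s.gcr.io/pause:3.7"`},
		{`(?m)^.*restrict_oom_score_adj = .*$`, `restrict_oom_score_adj = false`},
		{`(?m)^.*SystemdCgroup = .*$`, `SystemdCgroup = false`},
		{`(?m)^.*conf_dir = .*$`, `conf_dir = "/etc/cni/net.d"`},
	}
	for _, r := range rules {
		conf = regexp.MustCompile(r.pattern).ReplaceAllString(conf, r.repl)
	}
	fmt.Print(conf)
}
```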
I1031 17:15:55.077252 23640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1031 17:15:55.085765 23640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1031 17:15:55.094463 23640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1031 17:15:55.236052 23640 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1031 17:15:55.264069 23640 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I1031 17:15:55.264144 23640 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1031 17:15:55.268680 23640 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I1031 17:15:56.373660 23640 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1031 17:15:56.377848 23640 retry.go:31] will retry after 2.160763633s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I1031 17:15:58.539029 23640 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
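retry.go backs off and re-stats the socket until containerd finishes restarting; the third attempt at 17:15:58 succeeds. A minimal sketch of such a wait loop, assuming a 60s budget like the "Will wait 60s for socket path" line above (the randomized backoff is an assumption, not minikube's exact policy):

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the budget runs out,
// sleeping a randomized 1-3s between attempts like the waits logged above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second + time.Duration(rand.Int63n(int64(2*time.Second))))
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is up")
}
```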
I1031 17:15:58.544127 23640 start.go:472] Will wait 60s for crictl version
I1031 17:15:58.544180 23640 ssh_runner.go:195] Run: sudo crictl version
I1031 17:15:58.607163 23640 start.go:481] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.8
RuntimeApiVersion: v1alpha2
I1031 17:15:58.607224 23640 ssh_runner.go:195] Run: containerd --version
I1031 17:15:58.710710 23640 ssh_runner.go:195] Run: containerd --version
I1031 17:15:58.757592 23640 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.8 ...
I1031 17:15:58.758989 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetIP
I1031 17:15:58.761753 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:58.762152 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:15:58.762184 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:15:58.762395 23640 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1031 17:15:58.767479 23640 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1031 17:15:58.767543 23640 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 17:15:58.805866 23640 containerd.go:553] all images are preloaded for containerd runtime.
I1031 17:15:58.805886 23640 containerd.go:467] Images already preloaded, skipping extraction
I1031 17:15:58.805932 23640 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 17:15:58.841757 23640 containerd.go:553] all images are preloaded for containerd runtime.
I1031 17:15:58.841776 23640 cache_images.go:84] Images are preloaded, skipping loading
I1031 17:15:58.841817 23640 ssh_runner.go:195] Run: sudo crictl info
I1031 17:15:58.874133 23640 cni.go:95] Creating CNI manager for ""
I1031 17:15:58.874155 23640 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
I1031 17:15:58.874170 23640 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1031 17:15:58.874188 23640 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-171421 NodeName:test-preload-171421 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1031 17:15:58.874367 23640 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.249
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "test-preload-171421"
  kubeletExtraArgs:
    node-ip: 192.168.39.249
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1031 17:15:58.874481 23640 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-171421 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-171421 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1031 17:15:58.874540 23640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I1031 17:15:58.883507 23640 binaries.go:44] Found k8s binaries, skipping transfer
I1031 17:15:58.883583 23640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1031 17:15:58.891555 23640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (514 bytes)
I1031 17:15:58.924601 23640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1031 17:15:58.947212 23640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
I1031 17:15:58.962839 23640 ssh_runner.go:195] Run: grep 192.168.39.249 control-plane.minikube.internal$ /etc/hosts
I1031 17:15:58.966247 23640 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421 for IP: 192.168.39.249
I1031 17:15:58.966331 23640 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3810/.minikube/ca.key
I1031 17:15:58.966379 23640 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3810/.minikube/proxy-client-ca.key
I1031 17:15:58.966444 23640 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/client.key
I1031 17:15:58.966493 23640 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/apiserver.key.671bc323
I1031 17:15:58.966537 23640 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/proxy-client.key
I1031 17:15:58.966620 23640 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3810/.minikube/certs/home/jenkins/minikube-integration/15232-3810/.minikube/certs/10599.pem (1338 bytes)
W1031 17:15:58.966645 23640 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3810/.minikube/certs/home/jenkins/minikube-integration/15232-3810/.minikube/certs/10599_empty.pem, impossibly tiny 0 bytes
I1031 17:15:58.966655 23640 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3810/.minikube/certs/home/jenkins/minikube-integration/15232-3810/.minikube/certs/ca-key.pem (1679 bytes)
I1031 17:15:58.966678 23640 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3810/.minikube/certs/home/jenkins/minikube-integration/15232-3810/.minikube/certs/ca.pem (1078 bytes)
I1031 17:15:58.966701 23640 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3810/.minikube/certs/home/jenkins/minikube-integration/15232-3810/.minikube/certs/cert.pem (1123 bytes)
I1031 17:15:58.966726 23640 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3810/.minikube/certs/home/jenkins/minikube-integration/15232-3810/.minikube/certs/key.pem (1679 bytes)
I1031 17:15:58.966762 23640 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3810/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3810/.minikube/files/etc/ssl/certs/105992.pem (1708 bytes)
I1031 17:15:58.967322 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1031 17:15:58.987878 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1031 17:15:59.008231 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1031 17:15:59.039900 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1031 17:15:59.060383 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1031 17:15:59.085602 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1031 17:15:59.119319 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1031 17:15:59.154740 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1031 17:15:59.187280 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1031 17:15:59.236887 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/certs/10599.pem --> /usr/share/ca-certificates/10599.pem (1338 bytes)
I1031 17:15:59.269763 23640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3810/.minikube/files/etc/ssl/certs/105992.pem --> /usr/share/ca-certificates/105992.pem (1708 bytes)
I1031 17:15:59.302981 23640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1031 17:15:59.323725 23640 ssh_runner.go:195] Run: openssl version
I1031 17:15:59.328742 23640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10599.pem && ln -fs /usr/share/ca-certificates/10599.pem /etc/ssl/certs/10599.pem"
I1031 17:15:59.338447 23640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10599.pem
I1031 17:15:59.342612 23640 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 31 16:43 /usr/share/ca-certificates/10599.pem
I1031 17:15:59.342662 23640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10599.pem
I1031 17:15:59.347833 23640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10599.pem /etc/ssl/certs/51391683.0"
I1031 17:15:59.356184 23640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105992.pem && ln -fs /usr/share/ca-certificates/105992.pem /etc/ssl/certs/105992.pem"
I1031 17:15:59.365873 23640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105992.pem
I1031 17:15:59.369957 23640 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 31 16:43 /usr/share/ca-certificates/105992.pem
I1031 17:15:59.369997 23640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105992.pem
I1031 17:15:59.375044 23640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105992.pem /etc/ssl/certs/3ec20f2e.0"
I1031 17:15:59.383440 23640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1031 17:15:59.393220 23640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1031 17:15:59.397207 23640 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 16:36 /usr/share/ca-certificates/minikubeCA.pem
I1031 17:15:59.397238 23640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1031 17:15:59.402339 23640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
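The three openssl/ln sequences above implement the standard OpenSSL CA-path convention: hash each PEM with `openssl x509 -hash -noout` and symlink `/etc/ssl/certs/<hash>.0` at it (e.g. `b5213941.0` for minikubeCA.pem). A minimal Go sketch of that step follows; it is illustrative of what certs.go is doing here, not minikube's actual code, and the function name is hypothetical.

```go
// Hypothetical sketch of the symlink step logged above: ask openssl for
// the subject hash of a CA certificate and link /etc/ssl/certs/<hash>.0
// at it so OpenSSL-based clients on the node trust the cert.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCertSymlink(pem string) error {
	// openssl x509 -hash -noout -in <pem> prints the subject hash,
	// e.g. "b5213941" for minikubeCA.pem in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent to: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := installCertSymlink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```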
I1031 17:15:59.410841 23640 kubeadm.go:396] StartCluster: {Name:test-preload-171421 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15232/minikube-v1.27.0-1666976405-15232-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-171421 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 17:15:59.410987 23640 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1031 17:15:59.411031 23640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1031 17:15:59.439493 23640 cri.go:87] found id: ""
I1031 17:15:59.439535 23640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1031 17:15:59.448414 23640 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I1031 17:15:59.448434 23640 kubeadm.go:627] restartCluster start
I1031 17:15:59.448471 23640 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1031 17:15:59.457052 23640 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1031 17:15:59.457507 23640 kubeconfig.go:92] found "test-preload-171421" server: "https://192.168.39.249:8443"
I1031 17:15:59.458118 23640 kapi.go:59] client config for test-preload-171421: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3810/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1031 17:15:59.458636 23640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1031 17:15:59.466960 23640 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -38,7 +38,7 @@
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
-- /stdout --
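The "needs reconfigure" decision above comes straight from that unified diff: the deployed /var/tmp/minikube/kubeadm.yaml is compared with the freshly rendered kubeadm.yaml.new, and diff's exit status 1 (files differ) triggers the restart path. A minimal sketch of that check, under the assumption that `diff` is on PATH; the function name is illustrative, not minikube's.

```go
// Minimal sketch of the "configs differ" check logged above: diff -u
// exits 0 when the files match, 1 when they differ, and 2 on trouble.
package main

import (
	"fmt"
	"os/exec"
)

func needsReconfigure(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical configs, nothing to do
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ; out holds the hunks
	}
	return false, "", err // exit code 2: missing file or similar
}

func main() {
	differ, hunks, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(differ, err)
	fmt.Print(hunks)
}
```

Here the only hunk is the kubernetesVersion bump from v1.24.4 to v1.24.6, which is exactly what this preload upgrade test provokes.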
I1031 17:15:59.466972 23640 kubeadm.go:1114] stopping kube-system containers ...
I1031 17:15:59.466982 23640 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I1031 17:15:59.467025 23640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1031 17:15:59.497469 23640 cri.go:87] found id: ""
I1031 17:15:59.497506 23640 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1031 17:15:59.534172 23640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1031 17:15:59.549233 23640 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Oct 31 17:15 /etc/kubernetes/admin.conf
-rw------- 1 root root 5654 Oct 31 17:15 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Oct 31 17:15 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5606 Oct 31 17:15 /etc/kubernetes/scheduler.conf
I1031 17:15:59.549279 23640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1031 17:15:59.557634 23640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1031 17:15:59.564888 23640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1031 17:15:59.572136 23640 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1031 17:15:59.572176 23640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1031 17:15:59.580040 23640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1031 17:15:59.587317 23640 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1031 17:15:59.587359 23640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1031 17:15:59.594890 23640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1031 17:15:59.602590 23640 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1031 17:15:59.602605 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:15:59.690515 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:16:00.235333 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:16:00.583672 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:16:00.662024 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
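The five Run lines above replay individual `kubeadm init` phases against the updated config rather than doing a full re-init: certs, kubeconfig, kubelet-start, control-plane, then local etcd. A sketch of the same sequence, with paths taken verbatim from the log and error handling simplified for illustration:

```go
// Sketch of the phase replay logged above. Each phase is idempotent
// against existing state, which is what makes a cluster "restart"
// cheaper than a fresh `kubeadm init`.
package main

import (
	"log"
	"os/exec"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.6/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("kubeadm %v: %v\n%s", args, err, out)
		}
	}
}
```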
I1031 17:16:00.728352 23640 api_server.go:51] waiting for apiserver process to appear ...
I1031 17:16:00.728438 23640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1031 17:16:01.241503 23640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1031 17:16:01.741430 23640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1031 17:16:02.241795 23640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1031 17:16:02.741745 23640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1031 17:16:02.764309 23640 api_server.go:71] duration metric: took 2.035962265s to wait for apiserver process to appear ...
I1031 17:16:02.764352 23640 api_server.go:87] waiting for apiserver healthz status ...
I1031 17:16:02.764372 23640 api_server.go:252] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
I1031 17:16:02.764834 23640 api_server.go:268] stopped: https://192.168.39.249:8443/healthz: Get "https://192.168.39.249:8443/healthz": dial tcp 192.168.39.249:8443: connect: connection refused
I1031 17:16:03.265802 23640 api_server.go:252] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
I1031 17:16:03.266412 23640 api_server.go:268] stopped: https://192.168.39.249:8443/healthz: Get "https://192.168.39.249:8443/healthz": dial tcp 192.168.39.249:8443: connect: connection refused
I1031 17:16:03.764921 23640 api_server.go:252] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
I1031 17:16:07.130069 23640 api_server.go:278] https://192.168.39.249:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1031 17:16:07.130106 23640 api_server.go:102] status: https://192.168.39.249:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1031 17:16:07.265322 23640 api_server.go:252] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
I1031 17:16:07.273976 23640 api_server.go:278] https://192.168.39.249:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1031 17:16:07.273999 23640 api_server.go:102] status: https://192.168.39.249:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1031 17:16:07.765559 23640 api_server.go:252] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
I1031 17:16:07.774065 23640 api_server.go:278] https://192.168.39.249:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1031 17:16:07.774088 23640 api_server.go:102] status: https://192.168.39.249:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1031 17:16:08.265158 23640 api_server.go:252] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
I1031 17:16:08.269827 23640 api_server.go:278] https://192.168.39.249:8443/healthz returned 200:
ok
I1031 17:16:08.276018 23640 api_server.go:140] control plane version: v1.24.6
I1031 17:16:08.276038 23640 api_server.go:130] duration metric: took 5.511679815s to wait for apiserver health ...
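The poll above GETs /healthz roughly every 500ms, logging the 403 (anonymous user, RBAC not yet bootstrapped) and 500 (post-start hooks still failing) bodies as warnings until the endpoint answers 200 "ok". A sketch of that loop; the real client authenticates with the cluster CA and client certificate, whereas this sketch skips TLS verification purely for brevity.

```go
// Sketch of the healthz poll logged above: retry GET /healthz until it
// returns 200 "ok" or the deadline passes. InsecureSkipVerify is an
// illustration-only shortcut, not what minikube does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			fmt.Printf("status: %s returned error %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.39.249:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```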
I1031 17:16:08.276045 23640 cni.go:95] Creating CNI manager for ""
I1031 17:16:08.276051 23640 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
I1031 17:16:08.277784 23640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I1031 17:16:08.279115 23640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1031 17:16:08.290690 23640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I1031 17:16:08.310198 23640 system_pods.go:43] waiting for kube-system pods to appear ...
I1031 17:16:08.317803 23640 system_pods.go:59] 7 kube-system pods found
I1031 17:16:08.317827 23640 system_pods.go:61] "coredns-6d4b75cb6d-qppww" [8abf149a-6d21-4361-aaee-58255cef5d37] Running
I1031 17:16:08.317834 23640 system_pods.go:61] "etcd-test-preload-171421" [2bd608e1-cc23-4ff9-8124-84f2a3b6ef0b] Running
I1031 17:16:08.317840 23640 system_pods.go:61] "kube-apiserver-test-preload-171421" [f6efc2ad-15a8-4228-a9ec-e51f2821a190] Pending
I1031 17:16:08.317848 23640 system_pods.go:61] "kube-controller-manager-test-preload-171421" [53641179-fc82-443a-8809-604213cc3314] Pending
I1031 17:16:08.317855 23640 system_pods.go:61] "kube-proxy-lwv2g" [b3f3e853-9f68-44d9-8c2a-e0335d2c7a19] Running
I1031 17:16:08.317859 23640 system_pods.go:61] "kube-scheduler-test-preload-171421" [221f209c-2d90-495b-9603-0f39603f8cf7] Pending
I1031 17:16:08.317866 23640 system_pods.go:61] "storage-provisioner" [8801cb7c-45f6-485d-986f-2d5d6d7cd09e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 17:16:08.317901 23640 system_pods.go:74] duration metric: took 7.690686ms to wait for pod list to return data ...
I1031 17:16:08.317906 23640 node_conditions.go:102] verifying NodePressure condition ...
I1031 17:16:08.320932 23640 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1031 17:16:08.320967 23640 node_conditions.go:123] node cpu capacity is 2
I1031 17:16:08.320982 23640 node_conditions.go:105] duration metric: took 3.0711ms to run NodePressure ...
I1031 17:16:08.321010 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:16:08.492579 23640 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1031 17:16:08.496163 23640 kubeadm.go:778] kubelet initialised
I1031 17:16:08.496183 23640 kubeadm.go:779] duration metric: took 3.577063ms waiting for restarted kubelet to initialise ...
I1031 17:16:08.496191 23640 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1031 17:16:08.500770 23640 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-qppww" in "kube-system" namespace to be "Ready" ...
I1031 17:16:08.504933 23640 pod_ready.go:92] pod "coredns-6d4b75cb6d-qppww" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:08.504955 23640 pod_ready.go:81] duration metric: took 4.159645ms waiting for pod "coredns-6d4b75cb6d-qppww" in "kube-system" namespace to be "Ready" ...
I1031 17:16:08.504969 23640 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:08.508834 23640 pod_ready.go:92] pod "etcd-test-preload-171421" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:08.508861 23640 pod_ready.go:81] duration metric: took 3.877535ms waiting for pod "etcd-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:08.508871 23640 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:10.520353 23640 pod_ready.go:102] pod "kube-apiserver-test-preload-171421" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
I1031 17:16:13.020657 23640 pod_ready.go:102] pod "kube-apiserver-test-preload-171421" in "kube-system" namespace has status "Ready":"False"
I1031 17:16:15.020098 23640 pod_ready.go:92] pod "kube-apiserver-test-preload-171421" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:15.020133 23640 pod_ready.go:81] duration metric: took 6.511249s waiting for pod "kube-apiserver-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:15.020151 23640 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:17.035700 23640 pod_ready.go:102] pod "kube-controller-manager-test-preload-171421" in "kube-system" namespace has status "Ready":"False"
I1031 17:16:19.533735 23640 pod_ready.go:102] pod "kube-controller-manager-test-preload-171421" in "kube-system" namespace has status "Ready":"False"
I1031 17:16:21.030594 23640 pod_ready.go:92] pod "kube-controller-manager-test-preload-171421" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:21.030626 23640 pod_ready.go:81] duration metric: took 6.0104684s waiting for pod "kube-controller-manager-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:21.030639 23640 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lwv2g" in "kube-system" namespace to be "Ready" ...
I1031 17:16:21.032373 23640 pod_ready.go:97] error getting pod "kube-proxy-lwv2g" in "kube-system" namespace (skipping!): pods "kube-proxy-lwv2g" not found
I1031 17:16:21.032392 23640 pod_ready.go:81] duration metric: took 1.746609ms waiting for pod "kube-proxy-lwv2g" in "kube-system" namespace to be "Ready" ...
E1031 17:16:21.032399 23640 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-lwv2g" in "kube-system" namespace (skipping!): pods "kube-proxy-lwv2g" not found
I1031 17:16:21.032405 23640 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:21.036874 23640 pod_ready.go:92] pod "kube-scheduler-test-preload-171421" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:21.036898 23640 pod_ready.go:81] duration metric: took 4.486246ms waiting for pod "kube-scheduler-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:21.036911 23640 pod_ready.go:38] duration metric: took 12.540703481s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
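Each pod_ready wait above boils down to polling one pod's PodReady condition until it is True, treating "not found" as a skip (as happened for kube-proxy-lwv2g, which was replaced by kube-proxy-bqfmx across the restart). A client-go sketch of that loop; the kubeconfig path is the one from this log, and the fixed 2s poll interval is an assumption, not minikube's actual backoff.

```go
// Sketch of the per-pod "Ready" wait logged above, using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15232-3810/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6d4b75cb6d-qppww", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod not found (skipping!)") // e.g. kube-proxy-lwv2g above
			return
		}
		if err == nil && podReady(pod) {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```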
I1031 17:16:21.036929 23640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1031 17:16:21.051327 23640 ops.go:34] apiserver oom_adj: -16
I1031 17:16:21.051358 23640 kubeadm.go:631] restartCluster took 21.602910218s
I1031 17:16:21.051367 23640 kubeadm.go:398] StartCluster complete in 21.640534177s
I1031 17:16:21.051385 23640 settings.go:142] acquiring lock: {Name:mk8905c4e71280dfbde1bae0f7b3d451a73cf90e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 17:16:21.051525 23640 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15232-3810/kubeconfig
I1031 17:16:21.052212 23640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3810/kubeconfig: {Name:mkdc64711765e6f3233a65cab945ca056f21ca06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 17:16:21.053123 23640 kapi.go:59] client config for test-preload-171421: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3810/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1031 17:16:21.055844 23640 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-171421" rescaled to 1
I1031 17:16:21.055922 23640 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1031 17:16:21.055949 23640 addons.go:412] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
I1031 17:16:21.056006 23640 addons.go:65] Setting storage-provisioner=true in profile "test-preload-171421"
I1031 17:16:21.055931 23640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1031 17:16:21.058421 23640 out.go:177] * Verifying Kubernetes components...
I1031 17:16:21.056045 23640 addons.go:153] Setting addon storage-provisioner=true in "test-preload-171421"
I1031 17:16:21.056091 23640 addons.go:65] Setting default-storageclass=true in profile "test-preload-171421"
I1031 17:16:21.056193 23640 config.go:180] Loaded profile config "test-preload-171421": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
W1031 17:16:21.059684 23640 addons.go:162] addon storage-provisioner should already be in state true
I1031 17:16:21.059727 23640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1031 17:16:21.059733 23640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-171421"
I1031 17:16:21.059749 23640 host.go:66] Checking if "test-preload-171421" exists ...
I1031 17:16:21.060154 23640 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1031 17:16:21.060177 23640 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1031 17:16:21.060224 23640 main.go:134] libmachine: Launching plugin server for driver kvm2
I1031 17:16:21.060308 23640 main.go:134] libmachine: Launching plugin server for driver kvm2
I1031 17:16:21.075585 23640 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:46447
I1031 17:16:21.075630 23640 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:38431
I1031 17:16:21.076011 23640 main.go:134] libmachine: () Calling .GetVersion
I1031 17:16:21.076026 23640 main.go:134] libmachine: () Calling .GetVersion
I1031 17:16:21.076503 23640 main.go:134] libmachine: Using API Version 1
I1031 17:16:21.076526 23640 main.go:134] libmachine: () Calling .SetConfigRaw
I1031 17:16:21.076619 23640 main.go:134] libmachine: Using API Version 1
I1031 17:16:21.076643 23640 main.go:134] libmachine: () Calling .SetConfigRaw
I1031 17:16:21.076874 23640 main.go:134] libmachine: () Calling .GetMachineName
I1031 17:16:21.077001 23640 main.go:134] libmachine: () Calling .GetMachineName
I1031 17:16:21.077178 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetState
I1031 17:16:21.077380 23640 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1031 17:16:21.077419 23640 main.go:134] libmachine: Launching plugin server for driver kvm2
I1031 17:16:21.079459 23640 kapi.go:59] client config for test-preload-171421: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3810/.minikube/profiles/test-preload-171421/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3810/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1031 17:16:21.087743 23640 addons.go:153] Setting addon default-storageclass=true in "test-preload-171421"
W1031 17:16:21.087763 23640 addons.go:162] addon default-storageclass should already be in state true
I1031 17:16:21.087785 23640 host.go:66] Checking if "test-preload-171421" exists ...
I1031 17:16:21.088126 23640 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1031 17:16:21.088169 23640 main.go:134] libmachine: Launching plugin server for driver kvm2
I1031 17:16:21.092835 23640 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42111
I1031 17:16:21.093232 23640 main.go:134] libmachine: () Calling .GetVersion
I1031 17:16:21.093705 23640 main.go:134] libmachine: Using API Version 1
I1031 17:16:21.093730 23640 main.go:134] libmachine: () Calling .SetConfigRaw
I1031 17:16:21.094054 23640 main.go:134] libmachine: () Calling .GetMachineName
I1031 17:16:21.094254 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetState
I1031 17:16:21.095968 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:16:21.098262 23640 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1031 17:16:21.099729 23640 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1031 17:16:21.099754 23640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1031 17:16:21.099771 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHHostname
I1031 17:16:21.102938 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:16:21.103455 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:16:21.103487 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:16:21.103659 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHPort
I1031 17:16:21.103818 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:16:21.103984 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHUsername
I1031 17:16:21.104142 23640 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3810/.minikube/machines/test-preload-171421/id_rsa Username:docker}
I1031 17:16:21.105817 23640 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43559
I1031 17:16:21.106193 23640 main.go:134] libmachine: () Calling .GetVersion
I1031 17:16:21.106655 23640 main.go:134] libmachine: Using API Version 1
I1031 17:16:21.106674 23640 main.go:134] libmachine: () Calling .SetConfigRaw
I1031 17:16:21.107023 23640 main.go:134] libmachine: () Calling .GetMachineName
I1031 17:16:21.107533 23640 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1031 17:16:21.107567 23640 main.go:134] libmachine: Launching plugin server for driver kvm2
I1031 17:16:21.121637 23640 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:40651
I1031 17:16:21.121996 23640 main.go:134] libmachine: () Calling .GetVersion
I1031 17:16:21.122387 23640 main.go:134] libmachine: Using API Version 1
I1031 17:16:21.122413 23640 main.go:134] libmachine: () Calling .SetConfigRaw
I1031 17:16:21.122723 23640 main.go:134] libmachine: () Calling .GetMachineName
I1031 17:16:21.122901 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetState
I1031 17:16:21.124322 23640 main.go:134] libmachine: (test-preload-171421) Calling .DriverName
I1031 17:16:21.124536 23640 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I1031 17:16:21.124551 23640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1031 17:16:21.124563 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHHostname
I1031 17:16:21.127420 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:16:21.127867 23640 main.go:134] libmachine: (test-preload-171421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:5f:1f", ip: ""} in network mk-test-preload-171421: {Iface:virbr1 ExpiryTime:2022-10-31 18:14:36 +0000 UTC Type:0 Mac:52:54:00:a9:5f:1f Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-171421 Clientid:01:52:54:00:a9:5f:1f}
I1031 17:16:21.127898 23640 main.go:134] libmachine: (test-preload-171421) DBG | domain test-preload-171421 has defined IP address 192.168.39.249 and MAC address 52:54:00:a9:5f:1f in network mk-test-preload-171421
I1031 17:16:21.128042 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHPort
I1031 17:16:21.128209 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHKeyPath
I1031 17:16:21.128378 23640 main.go:134] libmachine: (test-preload-171421) Calling .GetSSHUsername
I1031 17:16:21.128518 23640 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3810/.minikube/machines/test-preload-171421/id_rsa Username:docker}
I1031 17:16:21.211359 23640 node_ready.go:35] waiting up to 6m0s for node "test-preload-171421" to be "Ready" ...
I1031 17:16:21.211711 23640 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
I1031 17:16:21.214170 23640 node_ready.go:49] node "test-preload-171421" has status "Ready":"True"
I1031 17:16:21.214185 23640 node_ready.go:38] duration metric: took 2.801822ms waiting for node "test-preload-171421" to be "Ready" ...
I1031 17:16:21.214191 23640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1031 17:16:21.219184 23640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-qppww" in "kube-system" namespace to be "Ready" ...
I1031 17:16:21.232525 23640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1031 17:16:21.245263 23640 pod_ready.go:92] pod "coredns-6d4b75cb6d-qppww" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:21.245280 23640 pod_ready.go:81] duration metric: took 26.067838ms waiting for pod "coredns-6d4b75cb6d-qppww" in "kube-system" namespace to be "Ready" ...
I1031 17:16:21.245294 23640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:21.259455 23640 pod_ready.go:92] pod "etcd-test-preload-171421" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:21.259476 23640 pod_ready.go:81] duration metric: took 14.174687ms waiting for pod "etcd-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:21.259486 23640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:21.259493 23640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1031 17:16:21.629320 23640 pod_ready.go:92] pod "kube-apiserver-test-preload-171421" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:21.629345 23640 pod_ready.go:81] duration metric: took 369.850499ms waiting for pod "kube-apiserver-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:21.629357 23640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:22.032792 23640 pod_ready.go:92] pod "kube-controller-manager-test-preload-171421" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:22.032817 23640 pod_ready.go:81] duration metric: took 403.452631ms waiting for pod "kube-controller-manager-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:22.032843 23640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqfmx" in "kube-system" namespace to be "Ready" ...
I1031 17:16:22.363211 23640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130648583s)
I1031 17:16:22.363264 23640 main.go:134] libmachine: Making call to close driver server
I1031 17:16:22.363279 23640 main.go:134] libmachine: (test-preload-171421) Calling .Close
I1031 17:16:22.363580 23640 main.go:134] libmachine: (test-preload-171421) DBG | Closing plugin on server side
I1031 17:16:22.363619 23640 main.go:134] libmachine: Successfully made call to close driver server
I1031 17:16:22.363627 23640 main.go:134] libmachine: Making call to close connection to plugin binary
I1031 17:16:22.363635 23640 main.go:134] libmachine: Making call to close driver server
I1031 17:16:22.363644 23640 main.go:134] libmachine: (test-preload-171421) Calling .Close
I1031 17:16:22.363873 23640 main.go:134] libmachine: Successfully made call to close driver server
I1031 17:16:22.363893 23640 main.go:134] libmachine: Making call to close connection to plugin binary
I1031 17:16:22.455893 23640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196373996s)
I1031 17:16:22.455932 23640 main.go:134] libmachine: Making call to close driver server
I1031 17:16:22.455945 23640 main.go:134] libmachine: (test-preload-171421) Calling .Close
I1031 17:16:22.456228 23640 main.go:134] libmachine: (test-preload-171421) DBG | Closing plugin on server side
I1031 17:16:22.456296 23640 main.go:134] libmachine: Successfully made call to close driver server
I1031 17:16:22.456310 23640 main.go:134] libmachine: Making call to close connection to plugin binary
I1031 17:16:22.456329 23640 main.go:134] libmachine: Making call to close driver server
I1031 17:16:22.456343 23640 main.go:134] libmachine: (test-preload-171421) Calling .Close
I1031 17:16:22.456605 23640 main.go:134] libmachine: Successfully made call to close driver server
I1031 17:16:22.456629 23640 main.go:134] libmachine: Making call to close connection to plugin binary
I1031 17:16:22.456644 23640 main.go:134] libmachine: Making call to close driver server
I1031 17:16:22.456675 23640 main.go:134] libmachine: (test-preload-171421) Calling .Close
I1031 17:16:22.456613 23640 main.go:134] libmachine: (test-preload-171421) DBG | Closing plugin on server side
I1031 17:16:22.456935 23640 main.go:134] libmachine: Successfully made call to close driver server
I1031 17:16:22.456977 23640 main.go:134] libmachine: Making call to close connection to plugin binary
I1031 17:16:22.456978 23640 main.go:134] libmachine: (test-preload-171421) DBG | Closing plugin on server side
I1031 17:16:22.459058 23640 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I1031 17:16:22.460418 23640 addons.go:414] enableAddons completed in 1.404470988s
I1031 17:16:22.935601 23640 pod_ready.go:92] pod "kube-proxy-bqfmx" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:22.935634 23640 pod_ready.go:81] duration metric: took 902.783217ms waiting for pod "kube-proxy-bqfmx" in "kube-system" namespace to be "Ready" ...
I1031 17:16:22.935645 23640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:23.229386 23640 pod_ready.go:92] pod "kube-scheduler-test-preload-171421" in "kube-system" namespace has status "Ready":"True"
I1031 17:16:23.229408 23640 pod_ready.go:81] duration metric: took 293.75581ms waiting for pod "kube-scheduler-test-preload-171421" in "kube-system" namespace to be "Ready" ...
I1031 17:16:23.229417 23640 pod_ready.go:38] duration metric: took 2.015217542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1031 17:16:23.229434 23640 api_server.go:51] waiting for apiserver process to appear ...
I1031 17:16:23.229475 23640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1031 17:16:23.243013 23640 api_server.go:71] duration metric: took 2.187056629s to wait for apiserver process to appear ...
I1031 17:16:23.243031 23640 api_server.go:87] waiting for apiserver healthz status ...
I1031 17:16:23.243042 23640 api_server.go:252] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
I1031 17:16:23.248052 23640 api_server.go:278] https://192.168.39.249:8443/healthz returned 200:
ok
I1031 17:16:23.249217 23640 api_server.go:140] control plane version: v1.24.6
I1031 17:16:23.249234 23640 api_server.go:130] duration metric: took 6.198285ms to wait for apiserver health ...
I1031 17:16:23.249241 23640 system_pods.go:43] waiting for kube-system pods to appear ...
I1031 17:16:23.431123 23640 system_pods.go:59] 7 kube-system pods found
I1031 17:16:23.431149 23640 system_pods.go:61] "coredns-6d4b75cb6d-qppww" [8abf149a-6d21-4361-aaee-58255cef5d37] Running
I1031 17:16:23.431154 23640 system_pods.go:61] "etcd-test-preload-171421" [2bd608e1-cc23-4ff9-8124-84f2a3b6ef0b] Running
I1031 17:16:23.431158 23640 system_pods.go:61] "kube-apiserver-test-preload-171421" [f6efc2ad-15a8-4228-a9ec-e51f2821a190] Running
I1031 17:16:23.431162 23640 system_pods.go:61] "kube-controller-manager-test-preload-171421" [53641179-fc82-443a-8809-604213cc3314] Running
I1031 17:16:23.431166 23640 system_pods.go:61] "kube-proxy-bqfmx" [551f331a-2ff3-42be-859f-9692cb10e5f4] Running
I1031 17:16:23.431170 23640 system_pods.go:61] "kube-scheduler-test-preload-171421" [221f209c-2d90-495b-9603-0f39603f8cf7] Running
I1031 17:16:23.431173 23640 system_pods.go:61] "storage-provisioner" [8801cb7c-45f6-485d-986f-2d5d6d7cd09e] Running
I1031 17:16:23.431178 23640 system_pods.go:74] duration metric: took 181.933702ms to wait for pod list to return data ...
I1031 17:16:23.431184 23640 default_sa.go:34] waiting for default service account to be created ...
I1031 17:16:23.628480 23640 default_sa.go:45] found service account: "default"
I1031 17:16:23.628498 23640 default_sa.go:55] duration metric: took 197.305855ms for default service account to be created ...
I1031 17:16:23.628505 23640 system_pods.go:116] waiting for k8s-apps to be running ...
I1031 17:16:23.831851 23640 system_pods.go:86] 7 kube-system pods found
I1031 17:16:23.831873 23640 system_pods.go:89] "coredns-6d4b75cb6d-qppww" [8abf149a-6d21-4361-aaee-58255cef5d37] Running
I1031 17:16:23.831878 23640 system_pods.go:89] "etcd-test-preload-171421" [2bd608e1-cc23-4ff9-8124-84f2a3b6ef0b] Running
I1031 17:16:23.831883 23640 system_pods.go:89] "kube-apiserver-test-preload-171421" [f6efc2ad-15a8-4228-a9ec-e51f2821a190] Running
I1031 17:16:23.831888 23640 system_pods.go:89] "kube-controller-manager-test-preload-171421" [53641179-fc82-443a-8809-604213cc3314] Running
I1031 17:16:23.831894 23640 system_pods.go:89] "kube-proxy-bqfmx" [551f331a-2ff3-42be-859f-9692cb10e5f4] Running
I1031 17:16:23.831898 23640 system_pods.go:89] "kube-scheduler-test-preload-171421" [221f209c-2d90-495b-9603-0f39603f8cf7] Running
I1031 17:16:23.831901 23640 system_pods.go:89] "storage-provisioner" [8801cb7c-45f6-485d-986f-2d5d6d7cd09e] Running
I1031 17:16:23.831909 23640 system_pods.go:126] duration metric: took 203.398037ms to wait for k8s-apps to be running ...
I1031 17:16:23.831916 23640 system_svc.go:44] waiting for kubelet service to be running ....
I1031 17:16:23.831963 23640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1031 17:16:23.845253 23640 system_svc.go:56] duration metric: took 13.333438ms WaitForService to wait for kubelet.
I1031 17:16:23.845270 23640 kubeadm.go:573] duration metric: took 2.789316319s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1031 17:16:23.845282 23640 node_conditions.go:102] verifying NodePressure condition ...
I1031 17:16:24.028307 23640 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1031 17:16:24.028349 23640 node_conditions.go:123] node cpu capacity is 2
I1031 17:16:24.028361 23640 node_conditions.go:105] duration metric: took 183.073822ms to run NodePressure ...
I1031 17:16:24.028373 23640 start.go:217] waiting for startup goroutines ...
I1031 17:16:24.028634 23640 ssh_runner.go:195] Run: rm -f paused
I1031 17:16:24.073311 23640 start.go:506] kubectl: 1.25.3, cluster: 1.24.6 (minor skew: 1)
I1031 17:16:24.076502 23640 out.go:177] * Done! kubectl is now configured to use "test-preload-171421" cluster and "default" namespace by default
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
6261fee3cbc7d 0bb39497ab33b 2 seconds ago Running kube-proxy 0 bbb69b182e697
5cf38d25b6943 6e38f40d628db 15 seconds ago Running storage-provisioner 1 42979727b8565
be759b4241891 a4ca41631cc7a 16 seconds ago Running coredns 1 2d67c8387f87b
33e5d67075539 c786c777a4e1c 22 seconds ago Running kube-scheduler 0 6ee89bf9642c4
38feaf15ef627 aebe758cef4cd 22 seconds ago Running etcd 1 7d246a33609f4
1c501a7458d65 860f263331c95 23 seconds ago Running kube-apiserver 0 287ec6aea54d6
98dc6160be728 c6c20157a4233 23 seconds ago Running kube-controller-manager 0 996ed95726ee2
*
* ==> containerd <==
* -- Journal begins at Mon 2022-10-31 17:14:33 UTC, ends at Mon 2022-10-31 17:16:25 UTC. --
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.832472283Z" level=info msg="shim disconnected" id=25f2aae87150353708d908e1b4e613a457c22fcbd6f798fd43a4b4583e73c950
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.832681282Z" level=warning msg="cleaning up after shim disconnected" id=25f2aae87150353708d908e1b4e613a457c22fcbd6f798fd43a4b4583e73c950 namespace=k8s.io
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.832734181Z" level=info msg="cleaning up dead shim"
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.858670254Z" level=warning msg="cleanup warnings time=\"2022-10-31T17:16:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3715 runtime=io.containerd.runc.v2\n"
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.867544755Z" level=info msg="StopContainer for \"25f2aae87150353708d908e1b4e613a457c22fcbd6f798fd43a4b4583e73c950\" returns successfully"
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.868880377Z" level=info msg="StopPodSandbox for \"c71667e86a968d378543f1712d344a44b2da89e415b646232a98e17bb8e9a18d\""
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.869233646Z" level=info msg="Container to stop \"25f2aae87150353708d908e1b4e613a457c22fcbd6f798fd43a4b4583e73c950\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.924725292Z" level=info msg="shim disconnected" id=c71667e86a968d378543f1712d344a44b2da89e415b646232a98e17bb8e9a18d
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.924882945Z" level=warning msg="cleaning up after shim disconnected" id=c71667e86a968d378543f1712d344a44b2da89e415b646232a98e17bb8e9a18d namespace=k8s.io
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.924925439Z" level=info msg="cleaning up dead shim"
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.936951076Z" level=warning msg="cleanup warnings time=\"2022-10-31T17:16:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3750 runtime=io.containerd.runc.v2\n"
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.937391765Z" level=info msg="TearDown network for sandbox \"c71667e86a968d378543f1712d344a44b2da89e415b646232a98e17bb8e9a18d\" successfully"
Oct 31 17:16:19 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:19.937472685Z" level=info msg="StopPodSandbox for \"c71667e86a968d378543f1712d344a44b2da89e415b646232a98e17bb8e9a18d\" returns successfully"
Oct 31 17:16:20 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:20.903098219Z" level=info msg="RemoveContainer for \"25f2aae87150353708d908e1b4e613a457c22fcbd6f798fd43a4b4583e73c950\""
Oct 31 17:16:20 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:20.933350534Z" level=info msg="RemoveContainer for \"25f2aae87150353708d908e1b4e613a457c22fcbd6f798fd43a4b4583e73c950\" returns successfully"
Oct 31 17:16:21 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:21.849996556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqfmx,Uid:551f331a-2ff3-42be-859f-9692cb10e5f4,Namespace:kube-system,Attempt:0,}"
Oct 31 17:16:21 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:21.873508267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 17:16:21 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:21.873646495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 17:16:21 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:21.873668467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 17:16:21 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:21.873803513Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbb69b182e6975695540e13c721d5395a383aeab72dd013702708ef1a7f1e9d6 pid=3799 runtime=io.containerd.runc.v2
Oct 31 17:16:22 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:22.126777230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqfmx,Uid:551f331a-2ff3-42be-859f-9692cb10e5f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbb69b182e6975695540e13c721d5395a383aeab72dd013702708ef1a7f1e9d6\""
Oct 31 17:16:22 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:22.132365910Z" level=info msg="CreateContainer within sandbox \"bbb69b182e6975695540e13c721d5395a383aeab72dd013702708ef1a7f1e9d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 31 17:16:22 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:22.180209795Z" level=info msg="CreateContainer within sandbox \"bbb69b182e6975695540e13c721d5395a383aeab72dd013702708ef1a7f1e9d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6261fee3cbc7dd2cd7ee55aae21daea5567e9aa1b0f3fd34370476f18441c411\""
Oct 31 17:16:22 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:22.182779649Z" level=info msg="StartContainer for \"6261fee3cbc7dd2cd7ee55aae21daea5567e9aa1b0f3fd34370476f18441c411\""
Oct 31 17:16:22 test-preload-171421 containerd[2458]: time="2022-10-31T17:16:22.341451976Z" level=info msg="StartContainer for \"6261fee3cbc7dd2cd7ee55aae21daea5567e9aa1b0f3fd34370476f18441c411\" returns successfully"
*
* ==> coredns [be759b4241891bd54d7bd06f68b017b73f7de9964fa57ddac5fd834f5ea88f48] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name:               test-preload-171421
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=test-preload-171421
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=2e5adf9ee40d3190a65d3fa843a253d73ae4fdf3
                    minikube.k8s.io/name=test-preload-171421
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_10_31T17_15_20_0700
                    minikube.k8s.io/version=v1.27.1
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 31 Oct 2022 17:15:17 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  test-preload-171421
  AcquireTime:     <unset>
  RenewTime:       Mon, 31 Oct 2022 17:16:17 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----             ------  -----------------                 ------------------                ------                      -------
  MemoryPressure   False   Mon, 31 Oct 2022 17:16:07 +0000   Mon, 31 Oct 2022 17:15:14 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Mon, 31 Oct 2022 17:16:07 +0000   Mon, 31 Oct 2022 17:15:14 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Mon, 31 Oct 2022 17:16:07 +0000   Mon, 31 Oct 2022 17:15:14 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            True    Mon, 31 Oct 2022 17:16:07 +0000   Mon, 31 Oct 2022 17:15:31 +0000   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.39.249
  Hostname:    test-preload-171421
Capacity:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             2165900Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             2165900Ki
  pods:               110
System Info:
  Machine ID:                 4cb183baa442496094048eea3bbc86e3
  System UUID:                4cb183ba-a442-4960-9404-8eea3bbc86e3
  Boot ID:                    1d978092-3e09-469e-a458-36889754344a
  Kernel Version:             5.10.57
  OS Image:                   Buildroot 2021.02.12
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.8
  Kubelet Version:            v1.24.6
  Kube-Proxy Version:         v1.24.6
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace    Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                                          ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-6d4b75cb6d-qppww                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     52s
  kube-system  etcd-test-preload-171421                      100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         64s
  kube-system  kube-apiserver-test-preload-171421            250m (12%)    0 (0%)      0 (0%)           0 (0%)         18s
  kube-system  kube-controller-manager-test-preload-171421   200m (10%)    0 (0%)      0 (0%)           0 (0%)         18s
  kube-system  kube-proxy-bqfmx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
  kube-system  kube-scheduler-test-preload-171421            100m (5%)     0 (0%)      0 (0%)           0 (0%)         17s
  kube-system  storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (37%)  0 (0%)
  memory             170Mi (8%)  170Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From             Message
  ----    ------                   ----               ----             -------
  Normal  Starting                 50s                kube-proxy
  Normal  Starting                 14s                kube-proxy
  Normal  Starting                 2s                 kube-proxy
  Normal  NodeHasSufficientMemory  74s (x4 over 74s)  kubelet          Node test-preload-171421 status is now: NodeHasSufficientMemory
  Normal  NodeHasSufficientPID     74s (x3 over 74s)  kubelet          Node test-preload-171421 status is now: NodeHasSufficientPID
  Normal  NodeHasNoDiskPressure    74s (x4 over 74s)  kubelet          Node test-preload-171421 status is now: NodeHasNoDiskPressure
  Normal  Starting                 65s                kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  65s                kubelet          Node test-preload-171421 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    65s                kubelet          Node test-preload-171421 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     65s                kubelet          Node test-preload-171421 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
  Normal  NodeReady                54s                kubelet          Node test-preload-171421 status is now: NodeReady
  Normal  RegisteredNode           53s                node-controller  Node test-preload-171421 event: Registered Node test-preload-171421 in Controller
  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node test-preload-171421 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node test-preload-171421 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node test-preload-171421 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 25s                kubelet          Starting kubelet.
  Normal  RegisteredNode           6s                 node-controller  Node test-preload-171421 event: Registered Node test-preload-171421 in Controller
*
* ==> dmesg <==
* [Oct31 17:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.069170] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.839429] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.174266] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.127094] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +5.025988] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
[ +7.831109] systemd-fstab-generator[551]: Ignoring "noauto" for root device
[ +0.102067] systemd-fstab-generator[562]: Ignoring "noauto" for root device
[ +0.193536] systemd-fstab-generator[585]: Ignoring "noauto" for root device
[Oct31 17:15] systemd-fstab-generator[990]: Ignoring "noauto" for root device
[ +9.716954] systemd-fstab-generator[1379]: Ignoring "noauto" for root device
[ +14.247560] kauditd_printk_skb: 7 callbacks suppressed
[ +7.443188] kauditd_printk_skb: 22 callbacks suppressed
[ +12.247121] systemd-fstab-generator[2341]: Ignoring "noauto" for root device
[ +0.210896] systemd-fstab-generator[2382]: Ignoring "noauto" for root device
[ +0.141163] systemd-fstab-generator[2394]: Ignoring "noauto" for root device
[ +0.308356] systemd-fstab-generator[2440]: Ignoring "noauto" for root device
[ +2.990877] kauditd_printk_skb: 2 callbacks suppressed
[ +1.061453] overlayfs: failed to resolve '/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/24/fs': -2
[ +1.302444] systemd-fstab-generator[3026]: Ignoring "noauto" for root device
*
* ==> etcd [38feaf15ef62797da457ea4324768478d972bad2cec0f8420b92fd87b58f744e] <==
* {"level":"info","ts":"2022-10-31T17:16:03.105Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-10-31T17:16:03.106Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.249:2380"}
{"level":"info","ts":"2022-10-31T17:16:03.107Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.249:2380"}
{"level":"info","ts":"2022-10-31T17:16:03.099Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-10-31T17:16:04.474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 2"}
{"level":"info","ts":"2022-10-31T17:16:04.474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 2"}
{"level":"info","ts":"2022-10-31T17:16:04.474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 2"}
{"level":"info","ts":"2022-10-31T17:16:04.475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became candidate at term 3"}
{"level":"info","ts":"2022-10-31T17:16:04.475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgVoteResp from 318ee90c3446d547 at term 3"}
{"level":"info","ts":"2022-10-31T17:16:04.475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became leader at term 3"}
{"level":"info","ts":"2022-10-31T17:16:04.475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 318ee90c3446d547 elected leader 318ee90c3446d547 at term 3"}
{"level":"info","ts":"2022-10-31T17:16:04.477Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"318ee90c3446d547","local-member-attributes":"{Name:test-preload-171421 ClientURLs:[https://192.168.39.249:2379]}","request-path":"/0/members/318ee90c3446d547/attributes","cluster-id":"ba21282e7acd13d6","publish-timeout":"7s"}
{"level":"info","ts":"2022-10-31T17:16:04.477Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T17:16:04.478Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-10-31T17:16:04.480Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-10-31T17:16:04.489Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.249:2379"}
{"level":"info","ts":"2022-10-31T17:16:04.491Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-10-31T17:16:04.491Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"warn","ts":"2022-10-31T17:16:14.149Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.509822ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15368397591008543484 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.172336d383c14bbf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.172336d383c14bbf\" value_size:595 lease:6145025554153767322 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2022-10-31T17:16:14.150Z","caller":"traceutil/trace.go:171","msg":"trace[847241518] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:523; }","duration":"195.062344ms","start":"2022-10-31T17:16:13.955Z","end":"2022-10-31T17:16:14.150Z","steps":["trace[847241518] 'read index received' (duration: 71.378212ms)","trace[847241518] 'applied index is now lower than readState.Index' (duration: 123.678094ms)"],"step_count":2}
{"level":"info","ts":"2022-10-31T17:16:14.150Z","caller":"traceutil/trace.go:171","msg":"trace[1341297229] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"229.913656ms","start":"2022-10-31T17:16:13.920Z","end":"2022-10-31T17:16:14.150Z","steps":["trace[1341297229] 'process raft request' (duration: 106.170618ms)","trace[1341297229] 'compare' (duration: 122.420949ms)"],"step_count":2}
{"level":"warn","ts":"2022-10-31T17:16:14.151Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"195.507441ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-10-31T17:16:14.151Z","caller":"traceutil/trace.go:171","msg":"trace[1625436096] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:487; }","duration":"195.75798ms","start":"2022-10-31T17:16:13.955Z","end":"2022-10-31T17:16:14.151Z","steps":["trace[1625436096] 'agreement among raft nodes before linearized reading' (duration: 195.198083ms)"],"step_count":1}
{"level":"warn","ts":"2022-10-31T17:16:14.151Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"139.455014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-test-preload-171421\" ","response":"range_response_count:1 size:6779"}
{"level":"info","ts":"2022-10-31T17:16:14.152Z","caller":"traceutil/trace.go:171","msg":"trace[944264784] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-test-preload-171421; range_end:; response_count:1; response_revision:487; }","duration":"140.265148ms","start":"2022-10-31T17:16:14.012Z","end":"2022-10-31T17:16:14.152Z","steps":["trace[944264784] 'agreement among raft nodes before linearized reading' (duration: 139.156728ms)"],"step_count":1}
*
* ==> kernel <==
* 17:16:25 up 1 min, 0 users, load average: 2.12, 0.63, 0.22
Linux test-preload-171421 5.10.57 #1 SMP Fri Oct 28 21:02:11 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [1c501a7458d654503f8cc1fd185f2076e224dbb1a39efc5845409d1ff0607ab4] <==
* I1031 17:16:07.019803 1 establishing_controller.go:76] Starting EstablishingController
I1031 17:16:07.020050 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1031 17:16:07.112907 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1031 17:16:07.112939 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I1031 17:16:07.020068 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
E1031 17:16:07.160792 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1031 17:16:07.213953 1 shared_informer.go:262] Caches are synced for crd-autoregister
I1031 17:16:07.222000 1 apf_controller.go:322] Running API Priority and Fairness config worker
I1031 17:16:07.222495 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1031 17:16:07.223215 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1031 17:16:07.228820 1 cache.go:39] Caches are synced for autoregister controller
I1031 17:16:07.229609 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1031 17:16:07.246680 1 shared_informer.go:262] Caches are synced for node_authorizer
I1031 17:16:07.252506 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I1031 17:16:07.677630 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1031 17:16:08.024350 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1031 17:16:08.416073 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I1031 17:16:08.423656 1 controller.go:611] quota admission added evaluator for: deployments.apps
I1031 17:16:08.458278 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I1031 17:16:08.475973 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1031 17:16:08.481755 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1031 17:16:11.099393 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I1031 17:16:19.660706 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I1031 17:16:19.709795 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1031 17:16:19.909568 1 controller.go:611] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [98dc6160be72864005af7945b876234f5510ad187d2a09bb5c66bff0682a9cc6] <==
* I1031 17:16:19.676409 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-lwv2g"
I1031 17:16:19.678723 1 shared_informer.go:262] Caches are synced for HPA
I1031 17:16:19.685725 1 shared_informer.go:262] Caches are synced for PV protection
I1031 17:16:19.686232 1 shared_informer.go:262] Caches are synced for job
I1031 17:16:19.688249 1 shared_informer.go:262] Caches are synced for endpoint_slice
I1031 17:16:19.689213 1 shared_informer.go:262] Caches are synced for cronjob
I1031 17:16:19.691335 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I1031 17:16:19.693992 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I1031 17:16:19.695368 1 shared_informer.go:262] Caches are synced for ephemeral
I1031 17:16:19.697408 1 shared_informer.go:262] Caches are synced for persistent volume
I1031 17:16:19.701209 1 shared_informer.go:262] Caches are synced for PVC protection
I1031 17:16:19.712090 1 shared_informer.go:262] Caches are synced for ReplicationController
I1031 17:16:19.767962 1 shared_informer.go:262] Caches are synced for attach detach
I1031 17:16:19.768046 1 shared_informer.go:262] Caches are synced for disruption
I1031 17:16:19.768063 1 disruption.go:371] Sending events to api server.
I1031 17:16:19.768203 1 shared_informer.go:262] Caches are synced for ReplicaSet
I1031 17:16:19.787047 1 shared_informer.go:262] Caches are synced for deployment
I1031 17:16:19.840919 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I1031 17:16:19.895836 1 shared_informer.go:262] Caches are synced for resource quota
I1031 17:16:19.900512 1 shared_informer.go:262] Caches are synced for endpoint
I1031 17:16:19.928052 1 shared_informer.go:262] Caches are synced for resource quota
I1031 17:16:20.348735 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 17:16:20.380476 1 shared_informer.go:262] Caches are synced for garbage collector
I1031 17:16:20.380667 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1031 17:16:20.921958 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bqfmx"
*
* ==> kube-proxy [6261fee3cbc7dd2cd7ee55aae21daea5567e9aa1b0f3fd34370476f18441c411] <==
* I1031 17:16:22.453645 1 node.go:163] Successfully retrieved node IP: 192.168.39.249
I1031 17:16:22.453742 1 server_others.go:138] "Detected node IP" address="192.168.39.249"
I1031 17:16:22.453926 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1031 17:16:22.503959 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I1031 17:16:22.503999 1 server_others.go:206] "Using iptables Proxier"
I1031 17:16:22.504392 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1031 17:16:22.504679 1 server.go:661] "Version info" version="v1.24.6"
I1031 17:16:22.504716 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 17:16:22.508265 1 config.go:317] "Starting service config controller"
I1031 17:16:22.508452 1 shared_informer.go:255] Waiting for caches to sync for service config
I1031 17:16:22.508500 1 config.go:226] "Starting endpoint slice config controller"
I1031 17:16:22.508506 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1031 17:16:22.509710 1 config.go:444] "Starting node config controller"
I1031 17:16:22.509818 1 shared_informer.go:255] Waiting for caches to sync for node config
I1031 17:16:22.609394 1 shared_informer.go:262] Caches are synced for endpoint slice config
I1031 17:16:22.609449 1 shared_informer.go:262] Caches are synced for service config
I1031 17:16:22.609887 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [33e5d67075539e5f535b4eaf09639daab7a9016e3335760c69ed4d71e41d696d] <==
* I1031 17:16:04.296011 1 serving.go:348] Generated self-signed cert in-memory
W1031 17:16:07.056071 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1031 17:16:07.072351 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1031 17:16:07.073217 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1031 17:16:07.073560 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1031 17:16:07.164056 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.6"
I1031 17:16:07.164228 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 17:16:07.168437 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1031 17:16:07.168642 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1031 17:16:07.170915 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1031 17:16:07.171856 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1031 17:16:07.269705 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Mon 2022-10-31 17:14:33 UTC, ends at Mon 2022-10-31 17:16:25 UTC. --
Oct 31 17:16:10 test-preload-171421 kubelet[3032]: I1031 17:16:10.780473 3032 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7a767c1ad58b1069320a96ed45b9de10 path="/var/lib/kubelet/pods/7a767c1ad58b1069320a96ed45b9de10/volumes"
Oct 31 17:16:10 test-preload-171421 kubelet[3032]: I1031 17:16:10.782269 3032 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b86573df7702120ca65533a757244479 path="/var/lib/kubelet/pods/b86573df7702120ca65533a757244479/volumes"
Oct 31 17:16:15 test-preload-171421 kubelet[3032]: I1031 17:16:15.628447 3032 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.077583 3032 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-lib-modules\") pod \"b3f3e853-9f68-44d9-8c2a-e0335d2c7a19\" (UID: \"b3f3e853-9f68-44d9-8c2a-e0335d2c7a19\") "
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.077659 3032 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-kube-proxy\") pod \"b3f3e853-9f68-44d9-8c2a-e0335d2c7a19\" (UID: \"b3f3e853-9f68-44d9-8c2a-e0335d2c7a19\") "
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.077688 3032 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gtpc\" (UniqueName: \"kubernetes.io/projected/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-kube-api-access-9gtpc\") pod \"b3f3e853-9f68-44d9-8c2a-e0335d2c7a19\" (UID: \"b3f3e853-9f68-44d9-8c2a-e0335d2c7a19\") "
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.077707 3032 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-xtables-lock\") pod \"b3f3e853-9f68-44d9-8c2a-e0335d2c7a19\" (UID: \"b3f3e853-9f68-44d9-8c2a-e0335d2c7a19\") "
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.077774 3032 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b3f3e853-9f68-44d9-8c2a-e0335d2c7a19" (UID: "b3f3e853-9f68-44d9-8c2a-e0335d2c7a19"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.077809 3032 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b3f3e853-9f68-44d9-8c2a-e0335d2c7a19" (UID: "b3f3e853-9f68-44d9-8c2a-e0335d2c7a19"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: W1031 17:16:20.078779 3032 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19/volumes/kubernetes.io~configmap/kube-proxy: clearQuota called, but quotas disabled
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.079289 3032 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-kube-proxy" (OuterVolumeSpecName: "kube-proxy") pod "b3f3e853-9f68-44d9-8c2a-e0335d2c7a19" (UID: "b3f3e853-9f68-44d9-8c2a-e0335d2c7a19"). InnerVolumeSpecName "kube-proxy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.086334 3032 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-kube-api-access-9gtpc" (OuterVolumeSpecName: "kube-api-access-9gtpc") pod "b3f3e853-9f68-44d9-8c2a-e0335d2c7a19" (UID: "b3f3e853-9f68-44d9-8c2a-e0335d2c7a19"). InnerVolumeSpecName "kube-api-access-9gtpc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.178893 3032 reconciler.go:384] "Volume detached for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-kube-proxy\") on node \"test-preload-171421\" DevicePath \"\""
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.178918 3032 reconciler.go:384] "Volume detached for volume \"kube-api-access-9gtpc\" (UniqueName: \"kubernetes.io/projected/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-kube-api-access-9gtpc\") on node \"test-preload-171421\" DevicePath \"\""
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.178928 3032 reconciler.go:384] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-xtables-lock\") on node \"test-preload-171421\" DevicePath \"\""
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.178938 3032 reconciler.go:384] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19-lib-modules\") on node \"test-preload-171421\" DevicePath \"\""
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.881454 3032 scope.go:110] "RemoveContainer" containerID="25f2aae87150353708d908e1b4e613a457c22fcbd6f798fd43a4b4583e73c950"
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.938048 3032 topology_manager.go:200] "Topology Admit Handler"
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: E1031 17:16:20.938205 3032 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="b3f3e853-9f68-44d9-8c2a-e0335d2c7a19" containerName="kube-proxy"
Oct 31 17:16:20 test-preload-171421 kubelet[3032]: I1031 17:16:20.938239 3032 memory_manager.go:345] "RemoveStaleState removing state" podUID="b3f3e853-9f68-44d9-8c2a-e0335d2c7a19" containerName="kube-proxy"
Oct 31 17:16:21 test-preload-171421 kubelet[3032]: I1031 17:16:21.085050 3032 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/551f331a-2ff3-42be-859f-9692cb10e5f4-xtables-lock\") pod \"kube-proxy-bqfmx\" (UID: \"551f331a-2ff3-42be-859f-9692cb10e5f4\") " pod="kube-system/kube-proxy-bqfmx"
Oct 31 17:16:21 test-preload-171421 kubelet[3032]: I1031 17:16:21.085099 3032 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/551f331a-2ff3-42be-859f-9692cb10e5f4-kube-proxy\") pod \"kube-proxy-bqfmx\" (UID: \"551f331a-2ff3-42be-859f-9692cb10e5f4\") " pod="kube-system/kube-proxy-bqfmx"
Oct 31 17:16:21 test-preload-171421 kubelet[3032]: I1031 17:16:21.085837 3032 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4lx4\" (UniqueName: \"kubernetes.io/projected/551f331a-2ff3-42be-859f-9692cb10e5f4-kube-api-access-k4lx4\") pod \"kube-proxy-bqfmx\" (UID: \"551f331a-2ff3-42be-859f-9692cb10e5f4\") " pod="kube-system/kube-proxy-bqfmx"
Oct 31 17:16:21 test-preload-171421 kubelet[3032]: I1031 17:16:21.085907 3032 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/551f331a-2ff3-42be-859f-9692cb10e5f4-lib-modules\") pod \"kube-proxy-bqfmx\" (UID: \"551f331a-2ff3-42be-859f-9692cb10e5f4\") " pod="kube-system/kube-proxy-bqfmx"
Oct 31 17:16:22 test-preload-171421 kubelet[3032]: I1031 17:16:22.780065 3032 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b3f3e853-9f68-44d9-8c2a-e0335d2c7a19 path="/var/lib/kubelet/pods/b3f3e853-9f68-44d9-8c2a-e0335d2c7a19/volumes"
*
* ==> storage-provisioner [5cf38d25b6943e6e757f64468143f555cb2932efbce128f4cf261a51cdfa428d] <==
* I1031 17:16:09.420529 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1031 17:16:09.441045 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1031 17:16:09.441681 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-171421 -n test-preload-171421
helpers_test.go:261: (dbg) Run: kubectl --context test-preload-171421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context test-preload-171421 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context test-preload-171421 describe pod : exit status 1 (44.252393ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context test-preload-171421 describe pod : exit status 1
helpers_test.go:175: Cleaning up "test-preload-171421" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-171421
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-171421: (1.201480221s)
--- FAIL: TestPreload (125.48s)