=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-232607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E1101 23:26:28.341609 10671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/functional-225352/client.crt: no such file or directory
E1101 23:27:48.193469 10671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/addons-224602/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-232607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (1m57.511268029s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-232607 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-232607 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.357675375s)
preload_test.go:67: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-232607 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.6
preload_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-232607 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.6: (50.577659895s)
preload_test.go:76: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-232607 -- sudo crictl image ls
preload_test.go:81: Expected to find gcr.io/k8s-minikube/busybox in output of `sudo crictl image ls`, instead got
-- stdout --
IMAGE                                       TAG                  IMAGE ID        SIZE
docker.io/kindest/kindnetd                  v20220726-ed811e41   d921cee849482   25.8MB
gcr.io/k8s-minikube/storage-provisioner     v5                   6e38f40d628db   9.06MB
k8s.gcr.io/coredns/coredns                  v1.8.6               a4ca41631cc7a   13.6MB
k8s.gcr.io/etcd                             3.5.3-0              aebe758cef4cd   102MB
k8s.gcr.io/kube-apiserver                   v1.24.6              860f263331c95   33.8MB
k8s.gcr.io/kube-controller-manager          v1.24.6              c6c20157a4233   31MB
k8s.gcr.io/kube-proxy                       v1.24.4              7a53d1e08ef58   39.5MB
k8s.gcr.io/kube-proxy                       v1.24.6              0bb39497ab33b   39.5MB
k8s.gcr.io/kube-scheduler                   v1.24.6              c786c777a4e1c   15.5MB
k8s.gcr.io/pause                            3.7                  221177c6082a8   311kB
-- /stdout --
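The failing assertion is a substring check over the `crictl image ls` output above: the busybox image pulled before the v1.24.6 restart is no longer in the store. A minimal Go sketch of that check, reusing the binary path and profile name from this run (illustrative, not the actual preload_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs at preload_test.go:76.
	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
		"-p", "test-preload-232607", "--", "sudo", "crictl", "image", "ls").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// The image pulled before the restart should still be present.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Printf("busybox missing from image list:\n%s", out)
	}
}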
panic.go:522: *** TestPreload FAILED at 2022-11-01 23:28:57.980196025 +0000 UTC m=+2623.844744476
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-232607 -n test-preload-232607
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-232607 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-232607 logs -n 25: (1.105386431s)
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| cp | multinode-230419 cp multinode-230419-m03:/home/docker/cp-test.txt | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
| | multinode-230419:/home/docker/cp-test_multinode-230419-m03_multinode-230419.txt | | | | | |
| ssh | multinode-230419 ssh -n | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
| | multinode-230419-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-230419 ssh -n multinode-230419 sudo cat | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
| | /home/docker/cp-test_multinode-230419-m03_multinode-230419.txt | | | | | |
| cp | multinode-230419 cp multinode-230419-m03:/home/docker/cp-test.txt | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
| | multinode-230419-m02:/home/docker/cp-test_multinode-230419-m03_multinode-230419-m02.txt | | | | | |
| ssh | multinode-230419 ssh -n | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
| | multinode-230419-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-230419 ssh -n multinode-230419-m02 sudo cat | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
| | /home/docker/cp-test_multinode-230419-m03_multinode-230419-m02.txt | | | | | |
| node | multinode-230419 node stop m03 | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
| node | multinode-230419 node start | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-230419 | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | |
| stop | -p multinode-230419 | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:11 UTC |
| start | -p multinode-230419 | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:11 UTC | 01 Nov 22 23:18 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-230419 | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:18 UTC | |
| node | multinode-230419 node delete | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:18 UTC | 01 Nov 22 23:18 UTC |
| | m03 | | | | | |
| stop | multinode-230419 stop | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:18 UTC | 01 Nov 22 23:21 UTC |
| start | -p multinode-230419 | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:21 UTC | 01 Nov 22 23:25 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-230419 | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:25 UTC | |
| start | -p multinode-230419-m02 | multinode-230419-m02 | jenkins | v1.27.1 | 01 Nov 22 23:25 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-230419-m03 | multinode-230419-m03 | jenkins | v1.27.1 | 01 Nov 22 23:25 UTC | 01 Nov 22 23:26 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-230419 | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:26 UTC | |
| delete | -p multinode-230419-m03 | multinode-230419-m03 | jenkins | v1.27.1 | 01 Nov 22 23:26 UTC | 01 Nov 22 23:26 UTC |
| delete | -p multinode-230419 | multinode-230419 | jenkins | v1.27.1 | 01 Nov 22 23:26 UTC | 01 Nov 22 23:26 UTC |
| start | -p test-preload-232607 | test-preload-232607 | jenkins | v1.27.1 | 01 Nov 22 23:26 UTC | 01 Nov 22 23:28 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-232607 | test-preload-232607 | jenkins | v1.27.1 | 01 Nov 22 23:28 UTC | 01 Nov 22 23:28 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| start | -p test-preload-232607 | test-preload-232607 | jenkins | v1.27.1 | 01 Nov 22 23:28 UTC | 01 Nov 22 23:28 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.6 | | | | | |
| ssh | -p test-preload-232607 -- sudo | test-preload-232607 | jenkins | v1.27.1 | 01 Nov 22 23:28 UTC | 01 Nov 22 23:28 UTC |
| | crictl image ls | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/11/01 23:28:07
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1101 23:28:07.212406 24185 out.go:296] Setting OutFile to fd 1 ...
I1101 23:28:07.212517 24185 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 23:28:07.212527 24185 out.go:309] Setting ErrFile to fd 2...
I1101 23:28:07.212532 24185 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 23:28:07.212635 24185 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3880/.minikube/bin
I1101 23:28:07.213153 24185 out.go:303] Setting JSON to false
I1101 23:28:07.214037 24185 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4241,"bootTime":1667341046,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1101 23:28:07.214103 24185 start.go:126] virtualization: kvm guest
I1101 23:28:07.216542 24185 out.go:177] * [test-preload-232607] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1101 23:28:07.218043 24185 out.go:177] - MINIKUBE_LOCATION=15232
I1101 23:28:07.217984 24185 notify.go:220] Checking for updates...
I1101 23:28:07.220784 24185 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 23:28:07.222352 24185 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15232-3880/kubeconfig
I1101 23:28:07.223936 24185 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3880/.minikube
I1101 23:28:07.225431 24185 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1101 23:28:07.227176 24185 config.go:180] Loaded profile config "test-preload-232607": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I1101 23:28:07.227602 24185 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1101 23:28:07.227673 24185 main.go:134] libmachine: Launching plugin server for driver kvm2
I1101 23:28:07.243385 24185 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:34017
I1101 23:28:07.243768 24185 main.go:134] libmachine: () Calling .GetVersion
I1101 23:28:07.244259 24185 main.go:134] libmachine: Using API Version 1
I1101 23:28:07.244288 24185 main.go:134] libmachine: () Calling .SetConfigRaw
I1101 23:28:07.244627 24185 main.go:134] libmachine: () Calling .GetMachineName
I1101 23:28:07.244787 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:07.246798 24185 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I1101 23:28:07.248242 24185 driver.go:365] Setting default libvirt URI to qemu:///system
I1101 23:28:07.248528 24185 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1101 23:28:07.248557 24185 main.go:134] libmachine: Launching plugin server for driver kvm2
I1101 23:28:07.263008 24185 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:41383
I1101 23:28:07.263354 24185 main.go:134] libmachine: () Calling .GetVersion
I1101 23:28:07.263812 24185 main.go:134] libmachine: Using API Version 1
I1101 23:28:07.263834 24185 main.go:134] libmachine: () Calling .SetConfigRaw
I1101 23:28:07.264149 24185 main.go:134] libmachine: () Calling .GetMachineName
I1101 23:28:07.264322 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:07.299043 24185 out.go:177] * Using the kvm2 driver based on existing profile
I1101 23:28:07.300615 24185 start.go:282] selected driver: kvm2
I1101 23:28:07.300635 24185 start.go:808] validating driver "kvm2" against &{Name:test-preload-232607 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15232/minikube-v1.27.0-1666976405-15232-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-232607 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:28:07.300747 24185 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 23:28:07.301318 24185 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 23:28:07.301530 24185 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15232-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1101 23:28:07.316234 24185 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.27.1
I1101 23:28:07.316558 24185 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 23:28:07.316587 24185 cni.go:95] Creating CNI manager for ""
I1101 23:28:07.316600 24185 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
I1101 23:28:07.316616 24185 start_flags.go:317] config:
{Name:test-preload-232607 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15232/minikube-v1.27.0-1666976405-15232-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-232607 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:28:07.316727 24185 iso.go:124] acquiring lock: {Name:mk61da6294d889ca01904568b86a25258a195040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 23:28:07.318746 24185 out.go:177] * Starting control plane node test-preload-232607 in cluster test-preload-232607
I1101 23:28:07.320216 24185 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1101 23:28:07.476338 24185 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1101 23:28:07.476363 24185 cache.go:57] Caching tarball of preloaded images
I1101 23:28:07.476531 24185 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1101 23:28:07.478368 24185 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I1101 23:28:07.479699 24185 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1101 23:28:07.628951 24185 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15232-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1101 23:28:19.393848 24185 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1101 23:28:19.393939 24185 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1101 23:28:20.270069 24185 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
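The `?checksum=md5:` query on the download URL above asks the downloader to verify the tarball against that digest. A rough standalone sketch of the same verification, reusing the cache path and checksum from the download line (an illustration, not minikube's actual download.go):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	// Digest taken from the ?checksum=md5: query above.
	const want = "0de094b674a9198bc47721c3b23603d5"
	f, err := os.Open("/home/jenkins/minikube-integration/15232-3880/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	// Hash the whole file and compare against the expected digest.
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Println(err)
		return
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
	}
}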
I1101 23:28:20.270201 24185 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/config.json ...
I1101 23:28:20.270371 24185 cache.go:208] Successfully downloaded all kic artifacts
I1101 23:28:20.270399 24185 start.go:364] acquiring machines lock for test-preload-232607: {Name:mk0ceb2a54ab19909097a3b5db611a9647e1a223 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1101 23:28:20.270510 24185 start.go:368] acquired machines lock for "test-preload-232607" in 96.649µs
I1101 23:28:20.270530 24185 start.go:96] Skipping create...Using existing machine configuration
I1101 23:28:20.270534 24185 fix.go:55] fixHost starting:
I1101 23:28:20.270805 24185 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1101 23:28:20.270845 24185 main.go:134] libmachine: Launching plugin server for driver kvm2
I1101 23:28:20.285509 24185 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45675
I1101 23:28:20.285886 24185 main.go:134] libmachine: () Calling .GetVersion
I1101 23:28:20.286341 24185 main.go:134] libmachine: Using API Version 1
I1101 23:28:20.286367 24185 main.go:134] libmachine: () Calling .SetConfigRaw
I1101 23:28:20.286699 24185 main.go:134] libmachine: () Calling .GetMachineName
I1101 23:28:20.286884 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:20.287030 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetState
I1101 23:28:20.288684 24185 fix.go:103] recreateIfNeeded on test-preload-232607: state=Running err=<nil>
W1101 23:28:20.288724 24185 fix.go:129] unexpected machine state, will restart: <nil>
I1101 23:28:20.290845 24185 out.go:177] * Updating the running kvm2 "test-preload-232607" VM ...
I1101 23:28:20.292234 24185 machine.go:88] provisioning docker machine ...
I1101 23:28:20.292263 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:20.292444 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetMachineName
I1101 23:28:20.292629 24185 buildroot.go:166] provisioning hostname "test-preload-232607"
I1101 23:28:20.292646 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetMachineName
I1101 23:28:20.292782 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHHostname
I1101 23:28:20.295213 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.295634 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:20.295662 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.295925 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHPort
I1101 23:28:20.296099 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:20.296259 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:20.296430 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHUsername
I1101 23:28:20.296602 24185 main.go:134] libmachine: Using SSH client type: native
I1101 23:28:20.296783 24185 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.39.211 22 <nil> <nil>}
I1101 23:28:20.296806 24185 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-232607 && echo "test-preload-232607" | sudo tee /etc/hostname
I1101 23:28:20.445216 24185 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-232607
I1101 23:28:20.445258 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHHostname
I1101 23:28:20.448292 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.448714 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:20.448753 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.448886 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHPort
I1101 23:28:20.449108 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:20.449282 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:20.449434 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHUsername
I1101 23:28:20.449620 24185 main.go:134] libmachine: Using SSH client type: native
I1101 23:28:20.449760 24185 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.39.211 22 <nil> <nil>}
I1101 23:28:20.449779 24185 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-232607' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-232607/g' /etc/hosts;
  else
    echo '127.0.1.1 test-preload-232607' | sudo tee -a /etc/hosts;
  fi
fi
I1101 23:28:20.583316 24185 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1101 23:28:20.583350 24185 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3880/.minikube}
I1101 23:28:20.583375 24185 buildroot.go:174] setting up certificates
I1101 23:28:20.583387 24185 provision.go:83] configureAuth start
I1101 23:28:20.583399 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetMachineName
I1101 23:28:20.583665 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetIP
I1101 23:28:20.586367 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.586753 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:20.586784 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.586975 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHHostname
I1101 23:28:20.589086 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.589379 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:20.589402 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.589552 24185 provision.go:138] copyHostCerts
I1101 23:28:20.589602 24185 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3880/.minikube/cert.pem, removing ...
I1101 23:28:20.589614 24185 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3880/.minikube/cert.pem
I1101 23:28:20.589677 24185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3880/.minikube/cert.pem (1123 bytes)
I1101 23:28:20.589755 24185 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3880/.minikube/key.pem, removing ...
I1101 23:28:20.589764 24185 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3880/.minikube/key.pem
I1101 23:28:20.589792 24185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3880/.minikube/key.pem (1679 bytes)
I1101 23:28:20.589840 24185 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3880/.minikube/ca.pem, removing ...
I1101 23:28:20.589847 24185 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3880/.minikube/ca.pem
I1101 23:28:20.589876 24185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3880/.minikube/ca.pem (1082 bytes)
I1101 23:28:20.589934 24185 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3880/.minikube/certs/ca-key.pem org=jenkins.test-preload-232607 san=[192.168.39.211 192.168.39.211 localhost 127.0.0.1 minikube test-preload-232607]
I1101 23:28:20.719840 24185 provision.go:172] copyRemoteCerts
I1101 23:28:20.719899 24185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 23:28:20.719938 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHHostname
I1101 23:28:20.722488 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.722799 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:20.722833 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.723036 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHPort
I1101 23:28:20.723307 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:20.723466 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHUsername
I1101 23:28:20.723610 24185 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3880/.minikube/machines/test-preload-232607/id_rsa Username:docker}
I1101 23:28:20.824746 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1101 23:28:20.870688 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1101 23:28:20.898469 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1101 23:28:20.928057 24185 provision.go:86] duration metric: configureAuth took 344.658147ms
I1101 23:28:20.928082 24185 buildroot.go:189] setting minikube options for container-runtime
I1101 23:28:20.928232 24185 config.go:180] Loaded profile config "test-preload-232607": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I1101 23:28:20.928243 24185 machine.go:91] provisioned docker machine in 635.991918ms
I1101 23:28:20.928250 24185 start.go:300] post-start starting for "test-preload-232607" (driver="kvm2")
I1101 23:28:20.928257 24185 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 23:28:20.928278 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:20.928572 24185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 23:28:20.928604 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHHostname
I1101 23:28:20.931296 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.931676 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:20.931706 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:20.931857 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHPort
I1101 23:28:20.932080 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:20.932247 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHUsername
I1101 23:28:20.932381 24185 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3880/.minikube/machines/test-preload-232607/id_rsa Username:docker}
I1101 23:28:21.032471 24185 ssh_runner.go:195] Run: cat /etc/os-release
I1101 23:28:21.037720 24185 info.go:137] Remote host: Buildroot 2021.02.12
I1101 23:28:21.037747 24185 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3880/.minikube/addons for local assets ...
I1101 23:28:21.037867 24185 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3880/.minikube/files for local assets ...
I1101 23:28:21.038012 24185 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3880/.minikube/files/etc/ssl/certs/106712.pem -> 106712.pem in /etc/ssl/certs
I1101 23:28:21.038121 24185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 23:28:21.047442 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/files/etc/ssl/certs/106712.pem --> /etc/ssl/certs/106712.pem (1708 bytes)
I1101 23:28:21.071873 24185 start.go:303] post-start completed in 143.611895ms
I1101 23:28:21.071895 24185 fix.go:57] fixHost completed within 801.359605ms
I1101 23:28:21.071919 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHHostname
I1101 23:28:21.074560 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:21.074830 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:21.074861 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:21.075018 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHPort
I1101 23:28:21.075217 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:21.075381 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:21.075523 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHUsername
I1101 23:28:21.075659 24185 main.go:134] libmachine: Using SSH client type: native
I1101 23:28:21.075812 24185 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 192.168.39.211 22 <nil> <nil>}
I1101 23:28:21.075828 24185 main.go:134] libmachine: About to run SSH command:
date +%s.%N
I1101 23:28:21.207571 24185 main.go:134] libmachine: SSH cmd err, output: <nil>: 1667345301.204118009
I1101 23:28:21.207593 24185 fix.go:207] guest clock: 1667345301.204118009
I1101 23:28:21.207607 24185 fix.go:220] Guest: 2022-11-01 23:28:21.204118009 +0000 UTC Remote: 2022-11-01 23:28:21.071899391 +0000 UTC m=+13.919699239 (delta=132.218618ms)
I1101 23:28:21.207633 24185 fix.go:191] guest clock delta is within tolerance: 132.218618ms
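fix.go only resyncs the guest clock when the guest/host delta exceeds a tolerance; here the 132ms delta passes. A small Go sketch reproducing the delta computation from the two timestamps above (the 2s tolerance constant is an assumption; the log only says the delta is "within tolerance"):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps from the guest/Remote log lines above.
	guest := time.Unix(1667345301, 204118009)
	remote := time.Date(2022, time.November, 1, 23, 28, 21, 71899391, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	// Assumed threshold for illustration only.
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}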
I1101 23:28:21.207641 24185 start.go:83] releasing machines lock for "test-preload-232607", held for 937.117174ms
I1101 23:28:21.207688 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:21.207954 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetIP
I1101 23:28:21.210596 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:21.210984 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:21.211015 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:21.211204 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:21.211704 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:21.211897 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:21.211982 24185 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1101 23:28:21.212027 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHHostname
I1101 23:28:21.212144 24185 ssh_runner.go:195] Run: systemctl --version
I1101 23:28:21.212164 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHHostname
I1101 23:28:21.214623 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:21.214932 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:21.214966 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:21.214985 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:21.215120 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHPort
I1101 23:28:21.215293 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:21.215434 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:21.215459 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:21.215474 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHUsername
I1101 23:28:21.215630 24185 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3880/.minikube/machines/test-preload-232607/id_rsa Username:docker}
I1101 23:28:21.215701 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHPort
I1101 23:28:21.215856 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:21.216050 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHUsername
I1101 23:28:21.216204 24185 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3880/.minikube/machines/test-preload-232607/id_rsa Username:docker}
I1101 23:28:21.309157 24185 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1101 23:28:21.309287 24185 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 23:28:21.475779 24185 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
I1101 23:28:21.475836 24185 ssh_runner.go:195] Run: which lz4
I1101 23:28:21.480231 24185 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1101 23:28:21.484531 24185 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1101 23:28:21.484560 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I1101 23:28:23.766043 24185 containerd.go:496] Took 2.285842 seconds to copy over tarball
I1101 23:28:23.766098 24185 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1101 23:28:27.068692 24185 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.302567702s)
I1101 23:28:27.068723 24185 containerd.go:503] Took 3.302656 seconds to extract the tarball
I1101 23:28:27.068735 24185 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1101 23:28:27.113734 24185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:28:27.257682 24185 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1101 23:28:27.480353 24185 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1101 23:28:27.497788 24185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 23:28:27.512540 24185 docker.go:189] disabling docker service ...
I1101 23:28:27.512590 24185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1101 23:28:27.528546 24185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1101 23:28:27.543027 24185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1101 23:28:27.703269 24185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1101 23:28:27.836773 24185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1101 23:28:27.851762 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 23:28:27.882545 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I1101 23:28:27.893450 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I1101 23:28:27.906659 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I1101 23:28:27.916638 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
I1101 23:28:27.926754 24185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 23:28:27.936214 24185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 23:28:27.945256 24185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:28:28.088589 24185 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1101 23:28:28.261085 24185 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I1101 23:28:28.261157 24185 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1101 23:28:28.309516 24185 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I1101 23:28:29.414415 24185 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1101 23:28:29.419891 24185 retry.go:31] will retry after 2.160763633s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I1101 23:28:31.580936 24185 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
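The two `will retry after` lines above come from a poll loop waiting for the containerd socket to reappear after the restart. A stripped-down sketch of such a wait loop (the doubling backoff is illustrative; minikube's retry.go chooses its own delays):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls path until it exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := time.Second // illustrative backoff, not retry.go's schedule
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: stat %s\n", delay, path)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}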
I1101 23:28:31.586405 24185 start.go:472] Will wait 60s for crictl version
I1101 23:28:31.586475 24185 ssh_runner.go:195] Run: sudo crictl version
I1101 23:28:31.666033 24185 start.go:481] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.8
RuntimeApiVersion: v1alpha2
I1101 23:28:31.666094 24185 ssh_runner.go:195] Run: containerd --version
I1101 23:28:31.698896 24185 ssh_runner.go:195] Run: containerd --version
I1101 23:28:31.743887 24185 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.8 ...
I1101 23:28:31.745453 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetIP
I1101 23:28:31.748168 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:31.748554 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:31.748580 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:31.748781 24185 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1101 23:28:31.753320 24185 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1101 23:28:31.753383 24185 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 23:28:31.786106 24185 containerd.go:553] all images are preloaded for containerd runtime.
I1101 23:28:31.786125 24185 containerd.go:467] Images already preloaded, skipping extraction
I1101 23:28:31.786165 24185 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 23:28:31.817146 24185 containerd.go:553] all images are preloaded for containerd runtime.
I1101 23:28:31.817164 24185 cache_images.go:84] Images are preloaded, skipping loading
I1101 23:28:31.817207 24185 ssh_runner.go:195] Run: sudo crictl info
I1101 23:28:31.868666 24185 cni.go:95] Creating CNI manager for ""
I1101 23:28:31.868694 24185 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
I1101 23:28:31.868705 24185 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1101 23:28:31.868724 24185 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-232607 NodeName:test-preload-232607 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1101 23:28:31.868897 24185 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.211
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "test-preload-232607"
  kubeletExtraArgs:
    node-ip: 192.168.39.211
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1101 23:28:31.868999 24185 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-232607 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-232607 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1101 23:28:31.869055 24185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I1101 23:28:31.882783 24185 binaries.go:44] Found k8s binaries, skipping transfer
I1101 23:28:31.882841 24185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 23:28:31.900419 24185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (514 bytes)
I1101 23:28:31.930928 24185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 23:28:31.957006 24185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
I1101 23:28:31.986267 24185 ssh_runner.go:195] Run: grep 192.168.39.211 control-plane.minikube.internal$ /etc/hosts
I1101 23:28:31.990519 24185 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607 for IP: 192.168.39.211
I1101 23:28:31.990646 24185 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3880/.minikube/ca.key
I1101 23:28:31.990698 24185 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3880/.minikube/proxy-client-ca.key
I1101 23:28:31.990786 24185 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/client.key
I1101 23:28:31.990859 24185 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/apiserver.key.5ce1f1d4
I1101 23:28:31.990909 24185 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/proxy-client.key
I1101 23:28:31.991026 24185 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3880/.minikube/certs/home/jenkins/minikube-integration/15232-3880/.minikube/certs/10671.pem (1338 bytes)
W1101 23:28:31.991067 24185 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3880/.minikube/certs/home/jenkins/minikube-integration/15232-3880/.minikube/certs/10671_empty.pem, impossibly tiny 0 bytes
I1101 23:28:31.991085 24185 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3880/.minikube/certs/home/jenkins/minikube-integration/15232-3880/.minikube/certs/ca-key.pem (1675 bytes)
I1101 23:28:31.991134 24185 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3880/.minikube/certs/home/jenkins/minikube-integration/15232-3880/.minikube/certs/ca.pem (1082 bytes)
I1101 23:28:31.991171 24185 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3880/.minikube/certs/home/jenkins/minikube-integration/15232-3880/.minikube/certs/cert.pem (1123 bytes)
I1101 23:28:31.991203 24185 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3880/.minikube/certs/home/jenkins/minikube-integration/15232-3880/.minikube/certs/key.pem (1679 bytes)
I1101 23:28:31.991259 24185 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3880/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3880/.minikube/files/etc/ssl/certs/106712.pem (1708 bytes)
I1101 23:28:31.992046 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1101 23:28:32.025807 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1101 23:28:32.109252 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 23:28:32.145510 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1101 23:28:32.179135 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 23:28:32.236610 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1101 23:28:32.279037 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 23:28:32.321901 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1101 23:28:32.345926 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/files/etc/ssl/certs/106712.pem --> /usr/share/ca-certificates/106712.pem (1708 bytes)
I1101 23:28:32.366726 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 23:28:32.387966 24185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3880/.minikube/certs/10671.pem --> /usr/share/ca-certificates/10671.pem (1338 bytes)
I1101 23:28:32.409925 24185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
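
The "scp memory --> <path> (<n> bytes)" lines above are not real scp: ssh_runner streams in-memory bytes over the SSH session straight into the target path on the guest. A minimal Go sketch of the same idea, assuming an ssh binary on PATH and sudo tee on the guest (the host string and sample payload are illustrative, not minikube's actual transport):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// scpMemory mimics the "scp memory --> <path> (<n> bytes)" log lines:
// the file contents never touch the local disk; they are piped over an
// SSH session into sudo tee on the guest.
func scpMemory(host, path string, data []byte) error {
	cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s >/dev/null", path))
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	err := scpMemory("docker@192.168.39.211", "/var/lib/minikube/kubeconfig", []byte("apiVersion: v1\n"))
	fmt.Println(err)
}
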
I1101 23:28:32.425887 24185 ssh_runner.go:195] Run: openssl version
I1101 23:28:32.431318 24185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 23:28:32.441417 24185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 23:28:32.446790 24185 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 1 22:46 /usr/share/ca-certificates/minikubeCA.pem
I1101 23:28:32.446863 24185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 23:28:32.452335 24185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1101 23:28:32.460355 24185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10671.pem && ln -fs /usr/share/ca-certificates/10671.pem /etc/ssl/certs/10671.pem"
I1101 23:28:32.469467 24185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10671.pem
I1101 23:28:32.473836 24185 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 1 22:53 /usr/share/ca-certificates/10671.pem
I1101 23:28:32.473884 24185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10671.pem
I1101 23:28:32.479353 24185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10671.pem /etc/ssl/certs/51391683.0"
I1101 23:28:32.487747 24185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/106712.pem && ln -fs /usr/share/ca-certificates/106712.pem /etc/ssl/certs/106712.pem"
I1101 23:28:32.496970 24185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/106712.pem
I1101 23:28:32.501358 24185 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 1 22:53 /usr/share/ca-certificates/106712.pem
I1101 23:28:32.501394 24185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/106712.pem
I1101 23:28:32.507194 24185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/106712.pem /etc/ssl/certs/3ec20f2e.0"
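
The three symlinks just created (b5213941.0, 51391683.0, 3ec20f2e.0) are named after the output of "openssl x509 -hash -noout", the subject-name hash that OpenSSL's verifier looks up as <hash>.0 in /etc/ssl/certs. A sketch of the same install-and-link step in Go (illustrative, not minikube's exact helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert copies a PEM cert into /usr/share/ca-certificates and links
// it under /etc/ssl/certs as <subject-hash>.0, mirroring the openssl/ln
// steps in the log above.
func installCACert(src, name string) error {
	dst := "/usr/share/ca-certificates/" + name
	pem, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, pem, 0644); err != nil {
		return err
	}
	// `openssl x509 -hash -noout` prints the subject-name hash OpenSSL
	// uses to locate CA certs as <hash>.0 in its certificate directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(dst, link)
}

func main() {
	if err := installCACert("ca.crt", "minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
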
I1101 23:28:32.515005 24185 kubeadm.go:396] StartCluster: {Name:test-preload-232607 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15232/minikube-v1.27.0-1666976405-15232-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-232607 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:28:32.515083 24185 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1101 23:28:32.515122 24185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1101 23:28:32.546941 24185 cri.go:87] found id: "28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1"
I1101 23:28:32.546969 24185 cri.go:87] found id: "17e1f8f97ddb9676a26389886be3fd26d87af8be4f3373206040df6187d39928"
I1101 23:28:32.546980 24185 cri.go:87] found id: ""
I1101 23:28:32.547019 24185 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1101 23:28:32.566529 24185 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5","pid":3066,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5/rootfs","created":"2022-11-01T23:28:32.238081284Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-ld6lh_01498287-7b3d-4b85-b685-2d3fd1cd8e37","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-ld6lh","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1","pid":3054,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1/rootfs","created":"2022-11-01T23:28:32.222896731Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"3a2162debe6634bc4db80428f31b69238853648bffc02b18fae464ba7d048e25","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-g92wn","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a2162debe6634bc4db80428f31b69238853648bffc02b18fae464ba7d048e25","pid":2874,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a2162debe6634bc4db80428f31b69238853648bffc02b18fae464ba7d048e25","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a2162debe6634bc4db80428f31b69238853648bffc02b18fae464ba7d048e25/rootfs","created":"2022-11-01T23:28:31.275790635Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"3a2162debe6634bc4db80428f31b69238853648bffc02b18fae464ba7d048e25","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-g92wn_b85acaf2-0e7d-4f18-a0c6-c5c76604191a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-g92wn","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c68116770748cf60c09f68d17b799165eeefca28d795f8457e41a9e1e84cb559","pid":2881,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c68116770748cf60c09f68d17b799165eeefca28d795f8457e41a9e1e84cb559","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c68116770748cf60c09f68d17b799165eeefca28d795f8457e41a9e1e84cb559/rootfs","created":"2022-11-01T23:28:31.250696079Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"c68116770748cf60c09f68d17b799165eeefca28d795f8457e41a9e1e84cb559","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_801b8703-69d4-42b6-ab86-e6181f6c44ec","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dae245e1a173dbf6c73cf633be55bfc078d376d60bb694936912ba84cc2958c4","pid":2837,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dae245e1a173dbf6c73cf633be55bfc078d376d60bb694936912ba84cc2958c4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dae245e1a173dbf6c73cf633be55bfc078d376d60bb694936912ba84cc2958c4/rootfs","created":"2022-11-01T23:28:31.12222381Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"dae245e1a173dbf6c73cf633be55bfc078d376d60bb694936912ba84cc2958c4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-232607_24bce14278fb707359b40e7a5cf5c8d5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-232607","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
I1101 23:28:32.566661 24185 cri.go:124] list returned 5 containers
I1101 23:28:32.566681 24185 cri.go:127] container: {ID:23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5 Status:running}
I1101 23:28:32.566709 24185 cri.go:129] skipping 23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5 - not in ps
I1101 23:28:32.566721 24185 cri.go:127] container: {ID:28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1 Status:running}
I1101 23:28:32.566730 24185 cri.go:133] skipping {28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1 running}: state = "running", want "paused"
I1101 23:28:32.566744 24185 cri.go:127] container: {ID:3a2162debe6634bc4db80428f31b69238853648bffc02b18fae464ba7d048e25 Status:running}
I1101 23:28:32.566752 24185 cri.go:129] skipping 3a2162debe6634bc4db80428f31b69238853648bffc02b18fae464ba7d048e25 - not in ps
I1101 23:28:32.566763 24185 cri.go:127] container: {ID:c68116770748cf60c09f68d17b799165eeefca28d795f8457e41a9e1e84cb559 Status:running}
I1101 23:28:32.566775 24185 cri.go:129] skipping c68116770748cf60c09f68d17b799165eeefca28d795f8457e41a9e1e84cb559 - not in ps
I1101 23:28:32.566780 24185 cri.go:127] container: {ID:dae245e1a173dbf6c73cf633be55bfc078d376d60bb694936912ba84cc2958c4 Status:running}
I1101 23:28:32.566788 24185 cri.go:129] skipping dae245e1a173dbf6c73cf633be55bfc078d376d60bb694936912ba84cc2958c4 - not in ps
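
The cri.go lines above come from unmarshalling that "runc list -f json" dump and filtering by container state; because the request asked for {State:paused ...}, every running container is skipped. A hedged sketch of that parse-and-filter step (the struct keeps only the JSON fields the log inspects):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the fields of `runc list -f json` output that the
// log above actually uses: the id, the state, and the annotations.
type runcContainer struct {
	ID          string            `json:"id"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

// listByState mirrors the skip logic in the log: containers whose status
// differs from want are skipped ("state = running, want paused").
func listByState(root, want string) ([]string, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	if err != nil {
		return nil, err
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range cs {
		if c.Status != want {
			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, want)
			continue
		}
		ids = append(ids, c.ID)
	}
	return ids, nil
}

func main() {
	ids, err := listByState("/run/containerd/runc/k8s.io", "paused")
	fmt.Println(ids, err)
}
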
I1101 23:28:32.566834 24185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 23:28:32.579781 24185 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I1101 23:28:32.579801 24185 kubeadm.go:627] restartCluster start
I1101 23:28:32.579842 24185 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1101 23:28:32.591781 24185 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1101 23:28:32.592310 24185 kubeconfig.go:92] found "test-preload-232607" server: "https://192.168.39.211:8443"
I1101 23:28:32.592970 24185 kapi.go:59] client config for test-preload-232607: &rest.Config{Host:"https://192.168.39.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786820), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 23:28:32.593486 24185 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1101 23:28:32.604151 24185 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -38,7 +38,7 @@
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
-- /stdout --
I1101 23:28:32.604169 24185 kubeadm.go:1114] stopping kube-system containers ...
I1101 23:28:32.604179 24185 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I1101 23:28:32.604228 24185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1101 23:28:32.644200 24185 cri.go:87] found id: "28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1"
I1101 23:28:32.644228 24185 cri.go:87] found id: "17e1f8f97ddb9676a26389886be3fd26d87af8be4f3373206040df6187d39928"
I1101 23:28:32.644238 24185 cri.go:87] found id: ""
I1101 23:28:32.644243 24185 cri.go:232] Stopping containers: [28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1 17e1f8f97ddb9676a26389886be3fd26d87af8be4f3373206040df6187d39928]
I1101 23:28:32.644293 24185 ssh_runner.go:195] Run: which crictl
I1101 23:28:32.648816 24185 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1 17e1f8f97ddb9676a26389886be3fd26d87af8be4f3373206040df6187d39928
I1101 23:28:32.779527 24185 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1101 23:28:32.822235 24185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 23:28:32.833824 24185 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Nov 1 23:26 /etc/kubernetes/admin.conf
-rw------- 1 root root 5658 Nov 1 23:26 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Nov 1 23:27 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5602 Nov 1 23:26 /etc/kubernetes/scheduler.conf
I1101 23:28:32.833901 24185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1101 23:28:32.841918 24185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1101 23:28:32.849657 24185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1101 23:28:32.857400 24185 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1101 23:28:32.857442 24185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1101 23:28:32.865771 24185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1101 23:28:32.873417 24185 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1101 23:28:32.873458 24185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1101 23:28:32.881317 24185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 23:28:32.889773 24185 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1101 23:28:32.889793 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:28:33.001517 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:28:33.754252 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:28:34.125450 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:28:34.195179 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
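
Rather than rerunning kubeadm init wholesale, the restart path replays individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml, with PATH prefixed by the versioned binaries directory. A condensed Go sketch of that sequence (the phase list is read off the five Run lines above; error handling is simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// replayInitPhases runs the same kubeadm phases the restart path logs,
// in order, against one config file. PATH is prefixed with the versioned
// binaries dir so the matching kubeadm and kubelet are picked up.
func replayInitPhases(version, config string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	binDir := "/var/lib/minikube/binaries/" + version
	for _, p := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, p, config)
		c := exec.Command("/bin/bash", "-c", cmd)
		c.Stdout, c.Stderr = os.Stdout, os.Stderr
		if err := c.Run(); err != nil {
			return fmt.Errorf("phase %q: %w", p, err)
		}
	}
	return nil
}

func main() {
	if err := replayInitPhases("v1.24.6", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		os.Exit(1)
	}
}
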
I1101 23:28:34.284691 24185 api_server.go:51] waiting for apiserver process to appear ...
I1101 23:28:34.284781 24185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 23:28:34.803454 24185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 23:28:35.303082 24185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 23:28:35.803619 24185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 23:28:36.303121 24185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 23:28:36.803097 24185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 23:28:36.822385 24185 api_server.go:71] duration metric: took 2.537698144s to wait for apiserver process to appear ...
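
The five pgrep probes above are a simple poll-until-found loop on the apiserver's command line. A sketch of the same wait, assuming sudo and pgrep are available on the guest (the 500ms cadence matches the timestamps in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a kube-apiserver process exists, the
// same loop the log shows firing every ~500ms between 23:28:34 and :36.
func waitForProcess(pattern string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for time.Since(start) < timeout {
		// pgrep exits 0 once a process matches the full command line (-f),
		// exactly (-x), newest match first (-n).
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return time.Since(start), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return 0, fmt.Errorf("no process matched %q within %v", pattern, timeout)
}

func main() {
	d, err := waitForProcess("kube-apiserver.*minikube.*", time.Minute)
	fmt.Println(d, err)
}
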
I1101 23:28:36.822415 24185 api_server.go:87] waiting for apiserver healthz status ...
I1101 23:28:36.822430 24185 api_server.go:252] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
I1101 23:28:36.822885 24185 api_server.go:268] stopped: https://192.168.39.211:8443/healthz: Get "https://192.168.39.211:8443/healthz": dial tcp 192.168.39.211:8443: connect: connection refused
I1101 23:28:37.324005 24185 api_server.go:252] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
I1101 23:28:40.683530 24185 api_server.go:278] https://192.168.39.211:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1101 23:28:40.683557 24185 api_server.go:102] status: https://192.168.39.211:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1101 23:28:40.823794 24185 api_server.go:252] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
I1101 23:28:40.829889 24185 api_server.go:278] https://192.168.39.211:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1101 23:28:40.829919 24185 api_server.go:102] status: https://192.168.39.211:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1101 23:28:41.323469 24185 api_server.go:252] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
I1101 23:28:41.330549 24185 api_server.go:278] https://192.168.39.211:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1101 23:28:41.330573 24185 api_server.go:102] status: https://192.168.39.211:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1101 23:28:41.824035 24185 api_server.go:252] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
I1101 23:28:41.829039 24185 api_server.go:278] https://192.168.39.211:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1101 23:28:41.829061 24185 api_server.go:102] status: https://192.168.39.211:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1101 23:28:42.323756 24185 api_server.go:252] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
I1101 23:28:42.328850 24185 api_server.go:278] https://192.168.39.211:8443/healthz returned 200:
ok
I1101 23:28:42.335851 24185 api_server.go:140] control plane version: v1.24.6
I1101 23:28:42.335872 24185 api_server.go:130] duration metric: took 5.513451119s to wait for apiserver health ...
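
Note how the /healthz probe tolerated three different answers before the 200: connection refused while the apiserver was still binding, 403 while the anonymous-auth RBAC rules were being created, and 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finished. A hedged sketch of such a retry loop (the insecure TLS shortcut keeps the sketch self-contained; minikube itself verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 "ok",
// treating refused connections, 403 and 500 as "not ready yet", as in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.211:8443/healthz", 2*time.Minute))
}
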
I1101 23:28:42.335879 24185 cni.go:95] Creating CNI manager for ""
I1101 23:28:42.335887 24185 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
I1101 23:28:42.337966 24185 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I1101 23:28:42.339762 24185 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1101 23:28:42.349209 24185 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
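
The 457 bytes written to /etc/cni/net.d/1-k8s.conflist are the bridge CNI configuration announced on the previous line. The log does not reproduce the file, but a minimal bridge conflist consistent with the 10.244.0.0/16 podSubnet from the kubeadm.yaml diff looks roughly like this sketch (the contents are an assumption, not the exact file minikube wrote):

package main

import "os"

// A minimal bridge CNI conflist in the spirit of the file the log installs.
// The plugin chain (bridge + portmap) and the 10.244.0.0/16 range match the
// podSubnet shown in the kubeadm.yaml diff; everything else is illustrative.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
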
I1101 23:28:42.367476 24185 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 23:28:42.375519 24185 system_pods.go:59] 7 kube-system pods found
I1101 23:28:42.375545 24185 system_pods.go:61] "coredns-6d4b75cb6d-g92wn" [b85acaf2-0e7d-4f18-a0c6-c5c76604191a] Running
I1101 23:28:42.375549 24185 system_pods.go:61] "etcd-test-preload-232607" [6667207e-5994-4822-8cef-378db9373720] Running
I1101 23:28:42.375554 24185 system_pods.go:61] "kube-apiserver-test-preload-232607" [dac8234b-4617-4119-b4ee-e98e43b24216] Pending
I1101 23:28:42.375558 24185 system_pods.go:61] "kube-controller-manager-test-preload-232607" [904e5d94-81fe-459f-8c94-50b86df1104a] Pending
I1101 23:28:42.375565 24185 system_pods.go:61] "kube-proxy-ld6lh" [01498287-7b3d-4b85-b685-2d3fd1cd8e37] Running
I1101 23:28:42.375569 24185 system_pods.go:61] "kube-scheduler-test-preload-232607" [4a81bddb-78d2-435b-9bb6-885d6f32deef] Pending
I1101 23:28:42.375576 24185 system_pods.go:61] "storage-provisioner" [801b8703-69d4-42b6-ab86-e6181f6c44ec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 23:28:42.375581 24185 system_pods.go:74] duration metric: took 8.08562ms to wait for pod list to return data ...
I1101 23:28:42.375589 24185 node_conditions.go:102] verifying NodePressure condition ...
I1101 23:28:42.378340 24185 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1101 23:28:42.378365 24185 node_conditions.go:123] node cpu capacity is 2
I1101 23:28:42.378373 24185 node_conditions.go:105] duration metric: took 2.78046ms to run NodePressure ...
I1101 23:28:42.378386 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:28:42.609737 24185 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1101 23:28:42.613660 24185 kubeadm.go:778] kubelet initialised
I1101 23:28:42.613681 24185 kubeadm.go:779] duration metric: took 3.922281ms waiting for restarted kubelet to initialise ...
I1101 23:28:42.613687 24185 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 23:28:42.618212 24185 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-g92wn" in "kube-system" namespace to be "Ready" ...
I1101 23:28:42.622690 24185 pod_ready.go:92] pod "coredns-6d4b75cb6d-g92wn" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:42.622707 24185 pod_ready.go:81] duration metric: took 4.470418ms waiting for pod "coredns-6d4b75cb6d-g92wn" in "kube-system" namespace to be "Ready" ...
I1101 23:28:42.622716 24185 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:42.627834 24185 pod_ready.go:92] pod "etcd-test-preload-232607" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:42.627880 24185 pod_ready.go:81] duration metric: took 5.1299ms waiting for pod "etcd-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:42.627889 24185 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:44.646288 24185 pod_ready.go:102] pod "kube-apiserver-test-preload-232607" in "kube-system" namespace has status "Ready":"False"
I1101 23:28:47.138646 24185 pod_ready.go:92] pod "kube-apiserver-test-preload-232607" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:47.138681 24185 pod_ready.go:81] duration metric: took 4.510784868s waiting for pod "kube-apiserver-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:47.138693 24185 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:49.150933 24185 pod_ready.go:102] pod "kube-controller-manager-test-preload-232607" in "kube-system" namespace has status "Ready":"False"
I1101 23:28:51.651887 24185 pod_ready.go:92] pod "kube-controller-manager-test-preload-232607" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:51.651918 24185 pod_ready.go:81] duration metric: took 4.513216462s waiting for pod "kube-controller-manager-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:51.651928 24185 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ld6lh" in "kube-system" namespace to be "Ready" ...
I1101 23:28:51.657467 24185 pod_ready.go:92] pod "kube-proxy-ld6lh" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:51.657494 24185 pod_ready.go:81] duration metric: took 5.559077ms waiting for pod "kube-proxy-ld6lh" in "kube-system" namespace to be "Ready" ...
I1101 23:28:51.657506 24185 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:53.669976 24185 pod_ready.go:102] pod "kube-scheduler-test-preload-232607" in "kube-system" namespace has status "Ready":"False"
I1101 23:28:55.669071 24185 pod_ready.go:92] pod "kube-scheduler-test-preload-232607" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:55.669100 24185 pod_ready.go:81] duration metric: took 4.011586584s waiting for pod "kube-scheduler-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:55.669113 24185 pod_ready.go:38] duration metric: took 13.055418117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 23:28:55.669129 24185 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1101 23:28:55.679832 24185 ops.go:34] apiserver oom_adj: -16
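
The oom_adj read above confirms the apiserver runs at -16, meaning the kernel OOM killer will strongly avoid it. A small sketch of the same check (the Run line uses plain pgrep; picking the newest match with -n is an assumption here, and the /proc layout is Linux-specific):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj reads /proc/<pid>/oom_adj for the newest kube-apiserver
// process, as the log's `cat /proc/$(pgrep kube-apiserver)/oom_adj` does.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(adj)), nil
}

func main() {
	v, err := apiserverOOMAdj()
	fmt.Println(v, err)
}
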
I1101 23:28:55.679851 24185 kubeadm.go:631] restartCluster took 23.100044883s
I1101 23:28:55.679860 24185 kubeadm.go:398] StartCluster complete in 23.164858665s
I1101 23:28:55.679877 24185 settings.go:142] acquiring lock: {Name:mk247bf0c0fc06062ad4cfb8cc2c49394513b484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:28:55.680001 24185 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15232-3880/kubeconfig
I1101 23:28:55.680669 24185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3880/kubeconfig: {Name:mk71aa6d2439697955d6fb9e62112ba594689952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:28:55.681278 24185 kapi.go:59] client config for test-preload-232607: &rest.Config{Host:"https://192.168.39.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786820), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 23:28:55.683744 24185 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-232607" rescaled to 1
I1101 23:28:55.683799 24185 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1101 23:28:55.686524 24185 out.go:177] * Verifying Kubernetes components...
I1101 23:28:55.683843 24185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1101 23:28:55.683862 24185 addons.go:412] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
I1101 23:28:55.684017 24185 config.go:180] Loaded profile config "test-preload-232607": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I1101 23:28:55.687993 24185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 23:28:55.688024 24185 addons.go:65] Setting storage-provisioner=true in profile "test-preload-232607"
I1101 23:28:55.688030 24185 addons.go:65] Setting default-storageclass=true in profile "test-preload-232607"
I1101 23:28:55.688045 24185 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-232607"
I1101 23:28:55.688047 24185 addons.go:153] Setting addon storage-provisioner=true in "test-preload-232607"
W1101 23:28:55.688176 24185 addons.go:162] addon storage-provisioner should already be in state true
I1101 23:28:55.688219 24185 host.go:66] Checking if "test-preload-232607" exists ...
I1101 23:28:55.688355 24185 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1101 23:28:55.688395 24185 main.go:134] libmachine: Launching plugin server for driver kvm2
I1101 23:28:55.688550 24185 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1101 23:28:55.688592 24185 main.go:134] libmachine: Launching plugin server for driver kvm2
I1101 23:28:55.703755 24185 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:46619
I1101 23:28:55.704209 24185 main.go:134] libmachine: () Calling .GetVersion
I1101 23:28:55.704826 24185 main.go:134] libmachine: Using API Version 1
I1101 23:28:55.704871 24185 main.go:134] libmachine: () Calling .SetConfigRaw
I1101 23:28:55.705190 24185 main.go:134] libmachine: () Calling .GetMachineName
I1101 23:28:55.705496 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetState
I1101 23:28:55.707853 24185 kapi.go:59] client config for test-preload-232607: &rest.Config{Host:"https://192.168.39.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3880/.minikube/profiles/test-preload-232607/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786820), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 23:28:55.707995 24185 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:41859
I1101 23:28:55.708332 24185 main.go:134] libmachine: () Calling .GetVersion
I1101 23:28:55.708754 24185 main.go:134] libmachine: Using API Version 1
I1101 23:28:55.708783 24185 main.go:134] libmachine: () Calling .SetConfigRaw
I1101 23:28:55.709123 24185 main.go:134] libmachine: () Calling .GetMachineName
I1101 23:28:55.709662 24185 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1101 23:28:55.709707 24185 main.go:134] libmachine: Launching plugin server for driver kvm2
I1101 23:28:55.714865 24185 addons.go:153] Setting addon default-storageclass=true in "test-preload-232607"
W1101 23:28:55.714889 24185 addons.go:162] addon default-storageclass should already be in state true
I1101 23:28:55.714918 24185 host.go:66] Checking if "test-preload-232607" exists ...
I1101 23:28:55.715285 24185 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1101 23:28:55.715330 24185 main.go:134] libmachine: Launching plugin server for driver kvm2
I1101 23:28:55.724740 24185 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36893
I1101 23:28:55.725137 24185 main.go:134] libmachine: () Calling .GetVersion
I1101 23:28:55.725596 24185 main.go:134] libmachine: Using API Version 1
I1101 23:28:55.725620 24185 main.go:134] libmachine: () Calling .SetConfigRaw
I1101 23:28:55.725940 24185 main.go:134] libmachine: () Calling .GetMachineName
I1101 23:28:55.726108 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetState
I1101 23:28:55.727801 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:55.730297 24185 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1101 23:28:55.731726 24185 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1101 23:28:55.731741 24185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1101 23:28:55.731757 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHHostname
I1101 23:28:55.733558 24185 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33687
I1101 23:28:55.733978 24185 main.go:134] libmachine: () Calling .GetVersion
I1101 23:28:55.734528 24185 main.go:134] libmachine: Using API Version 1
I1101 23:28:55.734557 24185 main.go:134] libmachine: () Calling .SetConfigRaw
I1101 23:28:55.734942 24185 main.go:134] libmachine: () Calling .GetMachineName
I1101 23:28:55.735145 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:55.735494 24185 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1101 23:28:55.735551 24185 main.go:134] libmachine: Launching plugin server for driver kvm2
I1101 23:28:55.735643 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:55.735675 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:55.735869 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHPort
I1101 23:28:55.736065 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:55.736228 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHUsername
I1101 23:28:55.736385 24185 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3880/.minikube/machines/test-preload-232607/id_rsa Username:docker}
I1101 23:28:55.750266 24185 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44243
I1101 23:28:55.750709 24185 main.go:134] libmachine: () Calling .GetVersion
I1101 23:28:55.751226 24185 main.go:134] libmachine: Using API Version 1
I1101 23:28:55.751249 24185 main.go:134] libmachine: () Calling .SetConfigRaw
I1101 23:28:55.751556 24185 main.go:134] libmachine: () Calling .GetMachineName
I1101 23:28:55.751752 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetState
I1101 23:28:55.753426 24185 main.go:134] libmachine: (test-preload-232607) Calling .DriverName
I1101 23:28:55.753657 24185 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I1101 23:28:55.753675 24185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1101 23:28:55.753691 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHHostname
I1101 23:28:55.756569 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:55.756939 24185 main.go:134] libmachine: (test-preload-232607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ec:4c", ip: ""} in network mk-test-preload-232607: {Iface:virbr1 ExpiryTime:2022-11-02 00:26:22 +0000 UTC Type:0 Mac:52:54:00:1f:ec:4c Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-232607 Clientid:01:52:54:00:1f:ec:4c}
I1101 23:28:55.756972 24185 main.go:134] libmachine: (test-preload-232607) DBG | domain test-preload-232607 has defined IP address 192.168.39.211 and MAC address 52:54:00:1f:ec:4c in network mk-test-preload-232607
I1101 23:28:55.757118 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHPort
I1101 23:28:55.757293 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHKeyPath
I1101 23:28:55.757439 24185 main.go:134] libmachine: (test-preload-232607) Calling .GetSSHUsername
I1101 23:28:55.757620 24185 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15232-3880/.minikube/machines/test-preload-232607/id_rsa Username:docker}
I1101 23:28:55.842739 24185 node_ready.go:35] waiting up to 6m0s for node "test-preload-232607" to be "Ready" ...
I1101 23:28:55.844017 24185 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
I1101 23:28:55.845446 24185 node_ready.go:49] node "test-preload-232607" has status "Ready":"True"
I1101 23:28:55.845461 24185 node_ready.go:38] duration metric: took 2.697275ms waiting for node "test-preload-232607" to be "Ready" ...
I1101 23:28:55.845469 24185 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 23:28:55.850037 24185 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-g92wn" in "kube-system" namespace to be "Ready" ...
I1101 23:28:55.855304 24185 pod_ready.go:92] pod "coredns-6d4b75cb6d-g92wn" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:55.855319 24185 pod_ready.go:81] duration metric: took 5.261436ms waiting for pod "coredns-6d4b75cb6d-g92wn" in "kube-system" namespace to be "Ready" ...
I1101 23:28:55.855326 24185 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:55.859737 24185 pod_ready.go:92] pod "etcd-test-preload-232607" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:55.859754 24185 pod_ready.go:81] duration metric: took 4.422472ms waiting for pod "etcd-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:55.859764 24185 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:55.864729 24185 pod_ready.go:92] pod "kube-apiserver-test-preload-232607" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:55.864745 24185 pod_ready.go:81] duration metric: took 4.972743ms waiting for pod "kube-apiserver-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:55.864752 24185 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:55.878220 24185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1101 23:28:55.883794 24185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
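
As the two Run lines above show, addon manifests are applied with the cluster's own versioned kubectl and the in-guest kubeconfig. A minimal sketch of that step (paths taken from the log; error handling simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon mirrors the addon install lines above: the manifest is applied
// with the versioned kubectl and the kubeconfig that lives inside the guest.
func applyAddon(version, manifest string) error {
	kubectl := "/var/lib/minikube/binaries/" + version + "/kubectl"
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon("v1.24.6", m); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
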
I1101 23:28:56.066889 24185 pod_ready.go:92] pod "kube-controller-manager-test-preload-232607" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:56.066910 24185 pod_ready.go:81] duration metric: took 202.151558ms waiting for pod "kube-controller-manager-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:56.066923 24185 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xs9h8" in "kube-system" namespace to be "Ready" ...
I1101 23:28:56.467439 24185 pod_ready.go:92] pod "kube-proxy-xs9h8" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:56.467467 24185 pod_ready.go:81] duration metric: took 400.535417ms waiting for pod "kube-proxy-xs9h8" in "kube-system" namespace to be "Ready" ...
I1101 23:28:56.467480 24185 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:56.729916 24185 main.go:134] libmachine: Making call to close driver server
I1101 23:28:56.729944 24185 main.go:134] libmachine: (test-preload-232607) Calling .Close
I1101 23:28:56.730308 24185 main.go:134] libmachine: (test-preload-232607) DBG | Closing plugin on server side
I1101 23:28:56.730338 24185 main.go:134] libmachine: Successfully made call to close driver server
I1101 23:28:56.730351 24185 main.go:134] libmachine: Making call to close connection to plugin binary
I1101 23:28:56.730361 24185 main.go:134] libmachine: Making call to close driver server
I1101 23:28:56.730378 24185 main.go:134] libmachine: (test-preload-232607) Calling .Close
I1101 23:28:56.730624 24185 main.go:134] libmachine: Successfully made call to close driver server
I1101 23:28:56.730654 24185 main.go:134] libmachine: Making call to close connection to plugin binary
I1101 23:28:56.730674 24185 main.go:134] libmachine: (test-preload-232607) DBG | Closing plugin on server side
I1101 23:28:56.730680 24185 main.go:134] libmachine: Making call to close driver server
I1101 23:28:56.730696 24185 main.go:134] libmachine: (test-preload-232607) Calling .Close
I1101 23:28:56.730902 24185 main.go:134] libmachine: (test-preload-232607) DBG | Closing plugin on server side
I1101 23:28:56.730955 24185 main.go:134] libmachine: Successfully made call to close driver server
I1101 23:28:56.730979 24185 main.go:134] libmachine: Making call to close connection to plugin binary
I1101 23:28:56.788737 24185 main.go:134] libmachine: Making call to close driver server
I1101 23:28:56.788761 24185 main.go:134] libmachine: (test-preload-232607) Calling .Close
I1101 23:28:56.789065 24185 main.go:134] libmachine: (test-preload-232607) DBG | Closing plugin on server side
I1101 23:28:56.789102 24185 main.go:134] libmachine: Successfully made call to close driver server
I1101 23:28:56.789121 24185 main.go:134] libmachine: Making call to close connection to plugin binary
I1101 23:28:56.789133 24185 main.go:134] libmachine: Making call to close driver server
I1101 23:28:56.789143 24185 main.go:134] libmachine: (test-preload-232607) Calling .Close
I1101 23:28:56.789390 24185 main.go:134] libmachine: Successfully made call to close driver server
I1101 23:28:56.789411 24185 main.go:134] libmachine: Making call to close connection to plugin binary
I1101 23:28:56.792783 24185 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I1101 23:28:56.794373 24185 addons.go:414] enableAddons completed in 1.110516497s
I1101 23:28:56.867130 24185 pod_ready.go:92] pod "kube-scheduler-test-preload-232607" in "kube-system" namespace has status "Ready":"True"
I1101 23:28:56.867150 24185 pod_ready.go:81] duration metric: took 399.663179ms waiting for pod "kube-scheduler-test-preload-232607" in "kube-system" namespace to be "Ready" ...
I1101 23:28:56.867159 24185 pod_ready.go:38] duration metric: took 1.021683325s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
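
Each pod_ready.go wait above is a poll on the pod's PodReady condition. A condensed client-go sketch of the same check, assuming the kubeconfig path from this run and using the scheduler pod as the example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True -- the check
// behind every `has status "Ready":"True"` line above.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15232-3880/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute) // the "waiting up to 4m0s" budget
	for time.Now().Before(deadline) {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-test-preload-232607", metav1.GetOptions{})
		if err == nil && isPodReady(p) {
			fmt.Println("ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out")
}
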
I1101 23:28:56.867171 24185 api_server.go:51] waiting for apiserver process to appear ...
I1101 23:28:56.867208 24185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 23:28:56.879631 24185 api_server.go:71] duration metric: took 1.195800021s to wait for apiserver process to appear ...
I1101 23:28:56.879658 24185 api_server.go:87] waiting for apiserver healthz status ...
I1101 23:28:56.879666 24185 api_server.go:252] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
I1101 23:28:56.885186 24185 api_server.go:278] https://192.168.39.211:8443/healthz returned 200:
ok
I1101 23:28:56.886191 24185 api_server.go:140] control plane version: v1.24.6
I1101 23:28:56.886216 24185 api_server.go:130] duration metric: took 6.551047ms to wait for apiserver health ...
I1101 23:28:56.886226 24185 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 23:28:57.074475 24185 system_pods.go:59] 7 kube-system pods found
I1101 23:28:57.074505 24185 system_pods.go:61] "coredns-6d4b75cb6d-g92wn" [b85acaf2-0e7d-4f18-a0c6-c5c76604191a] Running
I1101 23:28:57.074510 24185 system_pods.go:61] "etcd-test-preload-232607" [6667207e-5994-4822-8cef-378db9373720] Running
I1101 23:28:57.074514 24185 system_pods.go:61] "kube-apiserver-test-preload-232607" [dac8234b-4617-4119-b4ee-e98e43b24216] Running
I1101 23:28:57.074519 24185 system_pods.go:61] "kube-controller-manager-test-preload-232607" [904e5d94-81fe-459f-8c94-50b86df1104a] Running
I1101 23:28:57.074523 24185 system_pods.go:61] "kube-proxy-xs9h8" [c1f36e00-964b-412d-98ed-819011296362] Running
I1101 23:28:57.074527 24185 system_pods.go:61] "kube-scheduler-test-preload-232607" [4a81bddb-78d2-435b-9bb6-885d6f32deef] Running
I1101 23:28:57.074535 24185 system_pods.go:61] "storage-provisioner" [801b8703-69d4-42b6-ab86-e6181f6c44ec] Running
I1101 23:28:57.074539 24185 system_pods.go:74] duration metric: took 188.308536ms to wait for pod list to return data ...
I1101 23:28:57.074546 24185 default_sa.go:34] waiting for default service account to be created ...
I1101 23:28:57.268747 24185 default_sa.go:45] found service account: "default"
I1101 23:28:57.268771 24185 default_sa.go:55] duration metric: took 194.220424ms for default service account to be created ...
I1101 23:28:57.268778 24185 system_pods.go:116] waiting for k8s-apps to be running ...
I1101 23:28:57.470359 24185 system_pods.go:86] 7 kube-system pods found
I1101 23:28:57.470388 24185 system_pods.go:89] "coredns-6d4b75cb6d-g92wn" [b85acaf2-0e7d-4f18-a0c6-c5c76604191a] Running
I1101 23:28:57.470394 24185 system_pods.go:89] "etcd-test-preload-232607" [6667207e-5994-4822-8cef-378db9373720] Running
I1101 23:28:57.470398 24185 system_pods.go:89] "kube-apiserver-test-preload-232607" [dac8234b-4617-4119-b4ee-e98e43b24216] Running
I1101 23:28:57.470402 24185 system_pods.go:89] "kube-controller-manager-test-preload-232607" [904e5d94-81fe-459f-8c94-50b86df1104a] Running
I1101 23:28:57.470407 24185 system_pods.go:89] "kube-proxy-xs9h8" [c1f36e00-964b-412d-98ed-819011296362] Running
I1101 23:28:57.470411 24185 system_pods.go:89] "kube-scheduler-test-preload-232607" [4a81bddb-78d2-435b-9bb6-885d6f32deef] Running
I1101 23:28:57.470415 24185 system_pods.go:89] "storage-provisioner" [801b8703-69d4-42b6-ab86-e6181f6c44ec] Running
I1101 23:28:57.470420 24185 system_pods.go:126] duration metric: took 201.638107ms to wait for k8s-apps to be running ...
I1101 23:28:57.470426 24185 system_svc.go:44] waiting for kubelet service to be running ....
I1101 23:28:57.470489 24185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 23:28:57.483312 24185 system_svc.go:56] duration metric: took 12.867648ms WaitForService to wait for kubelet.
I1101 23:28:57.483352 24185 kubeadm.go:573] duration metric: took 1.79951137s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1101 23:28:57.483385 24185 node_conditions.go:102] verifying NodePressure condition ...
I1101 23:28:57.667534 24185 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1101 23:28:57.667568 24185 node_conditions.go:123] node cpu capacity is 2
I1101 23:28:57.667583 24185 node_conditions.go:105] duration metric: took 184.192623ms to run NodePressure ...
I1101 23:28:57.667596 24185 start.go:217] waiting for startup goroutines ...
I1101 23:28:57.667883 24185 ssh_runner.go:195] Run: rm -f paused
I1101 23:28:57.711802 24185 start.go:506] kubectl: 1.25.3, cluster: 1.24.6 (minor skew: 1)
I1101 23:28:57.715173 24185 out.go:177] * Done! kubectl is now configured to use "test-preload-232607" cluster and "default" namespace by default
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
83b921239bb6f 0bb39497ab33b 3 seconds ago Running kube-proxy 0 c0f57c6eda093
998f599f4c646 6e38f40d628db 15 seconds ago Running storage-provisioner 2 c68116770748c
68b4a7c26791d a4ca41631cc7a 15 seconds ago Running coredns 2 3a2162debe663
f76cd797857dc c786c777a4e1c 22 seconds ago Running kube-scheduler 0 1ea079f382ba8
4c70e810d5b21 aebe758cef4cd 22 seconds ago Running etcd 1 d1a6bcdfbb4bf
72080e459318b c6c20157a4233 23 seconds ago Running kube-controller-manager 0 34d95700cad3c
9f04eb062c6f2 860f263331c95 23 seconds ago Running kube-apiserver 0 337e8d5db9464
28b497e71c1df a4ca41631cc7a 26 seconds ago Exited coredns 1 3a2162debe663
17e1f8f97ddb9 6e38f40d628db 26 seconds ago Exited storage-provisioner 1 c68116770748c
*
* ==> containerd <==
* -- Journal begins at Tue 2022-11-01 23:26:18 UTC, ends at Tue 2022-11-01 23:28:58 UTC. --
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.424403790Z" level=info msg="shim disconnected" id=b97444734e30e1a0d7ce347a663a873247086e853aa33c00dcc38af0bb78b2e9
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.424491625Z" level=warning msg="cleaning up after shim disconnected" id=b97444734e30e1a0d7ce347a663a873247086e853aa33c00dcc38af0bb78b2e9 namespace=k8s.io
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.424505151Z" level=info msg="cleaning up dead shim"
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.437202552Z" level=warning msg="cleanup warnings time=\"2022-11-01T23:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3846 runtime=io.containerd.runc.v2\n"
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.442253184Z" level=info msg="StopContainer for \"b97444734e30e1a0d7ce347a663a873247086e853aa33c00dcc38af0bb78b2e9\" returns successfully"
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.442917433Z" level=info msg="StopPodSandbox for \"23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5\""
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.443067658Z" level=info msg="Container to stop \"b97444734e30e1a0d7ce347a663a873247086e853aa33c00dcc38af0bb78b2e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.497879314Z" level=info msg="shim disconnected" id=23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.498185395Z" level=warning msg="cleaning up after shim disconnected" id=23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5 namespace=k8s.io
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.498329721Z" level=info msg="cleaning up dead shim"
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.514752257Z" level=warning msg="cleanup warnings time=\"2022-11-01T23:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3883 runtime=io.containerd.runc.v2\n"
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.515433101Z" level=info msg="TearDown network for sandbox \"23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5\" successfully"
Nov 01 23:28:53 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:53.515651546Z" level=info msg="StopPodSandbox for \"23e1e26e01dfd43f733eb3f55f512ddbbbca3e3b582073d05465e4897246eba5\" returns successfully"
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.498552569Z" level=info msg="RemoveContainer for \"b97444734e30e1a0d7ce347a663a873247086e853aa33c00dcc38af0bb78b2e9\""
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.506679703Z" level=info msg="RemoveContainer for \"b97444734e30e1a0d7ce347a663a873247086e853aa33c00dcc38af0bb78b2e9\" returns successfully"
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.871577531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xs9h8,Uid:c1f36e00-964b-412d-98ed-819011296362,Namespace:kube-system,Attempt:0,}"
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.895931728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.896094296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.896105685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.896587947Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0f57c6eda093fcdc4979d30b604b995e36372df86d500c0d6f8c46f45a0d2d9 pid=3906 runtime=io.containerd.runc.v2
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.958316124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xs9h8,Uid:c1f36e00-964b-412d-98ed-819011296362,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0f57c6eda093fcdc4979d30b604b995e36372df86d500c0d6f8c46f45a0d2d9\""
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.962014022Z" level=info msg="CreateContainer within sandbox \"c0f57c6eda093fcdc4979d30b604b995e36372df86d500c0d6f8c46f45a0d2d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.993079783Z" level=info msg="CreateContainer within sandbox \"c0f57c6eda093fcdc4979d30b604b995e36372df86d500c0d6f8c46f45a0d2d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"83b921239bb6fc0bd4204b708ac585eb98cfce224a7347a78f44be83ab644eff\""
Nov 01 23:28:54 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:54.995925894Z" level=info msg="StartContainer for \"83b921239bb6fc0bd4204b708ac585eb98cfce224a7347a78f44be83ab644eff\""
Nov 01 23:28:55 test-preload-232607 containerd[2532]: time="2022-11-01T23:28:55.077068449Z" level=info msg="StartContainer for \"83b921239bb6fc0bd4204b708ac585eb98cfce224a7347a78f44be83ab644eff\" returns successfully"
*
* ==> coredns [28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1] <==
*
*
* ==> coredns [68b4a7c26791d218b4a671c3d61a06caa6daaff959414c2c503fe19d951de9b9] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: test-preload-232607
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=test-preload-232607
kubernetes.io/os=linux
minikube.k8s.io/commit=65bfd3dc2bf9824cf305579b01895f56b2ba9210
minikube.k8s.io/name=test-preload-232607
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_11_01T23_27_09_0700
minikube.k8s.io/version=v1.27.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 01 Nov 2022 23:27:06 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: test-preload-232607
AcquireTime: <unset>
RenewTime: Tue, 01 Nov 2022 23:28:50 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 01 Nov 2022 23:28:40 +0000 Tue, 01 Nov 2022 23:27:02 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 01 Nov 2022 23:28:40 +0000 Tue, 01 Nov 2022 23:27:02 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 01 Nov 2022 23:28:40 +0000 Tue, 01 Nov 2022 23:27:02 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 01 Nov 2022 23:28:40 +0000 Tue, 01 Nov 2022 23:27:19 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.211
Hostname: test-preload-232607
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: e782d3c6b55c47ea9cded6e6b3c80aa7
System UUID: e782d3c6-b55c-47ea-9cde-d6e6b3c80aa7
Boot ID: d4fd23c0-7941-4a15-927c-444cb35a9f92
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.8
Kubelet Version: v1.24.6
Kube-Proxy Version: v1.24.6
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-g92wn 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 97s
kube-system etcd-test-preload-232607 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 110s
kube-system kube-apiserver-test-preload-232607 250m (12%) 0 (0%) 0 (0%) 0 (0%) 18s
kube-system kube-controller-manager-test-preload-232607 200m (10%) 0 (0%) 0 (0%) 0 (0%) 18s
kube-system kube-proxy-xs9h8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5s
kube-system kube-scheduler-test-preload-232607 100m (5%) 0 (0%) 0 (0%) 0 (0%) 17s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 94s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 94s kube-proxy
Normal Starting 13s kube-proxy
Normal Starting 3s kube-proxy
Normal NodeHasSufficientMemory 2m (x4 over 2m) kubelet Node test-preload-232607 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 2m (x4 over 2m) kubelet Node test-preload-232607 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 2m (x4 over 2m) kubelet Node test-preload-232607 status is now: NodeHasNoDiskPressure
Normal Starting 110s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 110s kubelet Node test-preload-232607 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 110s kubelet Node test-preload-232607 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 110s kubelet Node test-preload-232607 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 110s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 100s kubelet Node test-preload-232607 status is now: NodeReady
Normal RegisteredNode 97s node-controller Node test-preload-232607 event: Registered Node test-preload-232607 in Controller
Normal NodeHasSufficientMemory 25s (x8 over 25s) kubelet Node test-preload-232607 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 25s (x8 over 25s) kubelet Node test-preload-232607 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 25s (x7 over 25s) kubelet Node test-preload-232607 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 25s kubelet Updated Node Allocatable limit across pods
Normal Starting 25s kubelet Starting kubelet.
Normal RegisteredNode 6s node-controller Node test-preload-232607 event: Registered Node test-preload-232607 in Controller
*
* ==> dmesg <==
* [Nov 1 23:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.072334] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.969221] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.131687] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.129748] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +4.951096] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +8.021092] systemd-fstab-generator[550]: Ignoring "noauto" for root device
[ +0.113330] systemd-fstab-generator[561]: Ignoring "noauto" for root device
[ +0.194922] systemd-fstab-generator[584]: Ignoring "noauto" for root device
[ +27.551338] systemd-fstab-generator[988]: Ignoring "noauto" for root device
[Nov 1 23:27] systemd-fstab-generator[1380]: Ignoring "noauto" for root device
[ +13.797872] kauditd_printk_skb: 7 callbacks suppressed
[ +11.460992] kauditd_printk_skb: 22 callbacks suppressed
[Nov 1 23:28] kauditd_printk_skb: 4 callbacks suppressed
[ +24.218009] systemd-fstab-generator[2435]: Ignoring "noauto" for root device
[ +0.436603] systemd-fstab-generator[2467]: Ignoring "noauto" for root device
[ +0.154228] systemd-fstab-generator[2486]: Ignoring "noauto" for root device
[ +0.233498] systemd-fstab-generator[2522]: Ignoring "noauto" for root device
[ +6.049485] systemd-fstab-generator[3237]: Ignoring "noauto" for root device
*
* ==> etcd [4c70e810d5b2191aaa246a17d6c1c641d7aba5a9357fc68b0a7f8fb7e7f95748] <==
* {"level":"info","ts":"2022-11-01T23:28:37.100Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d3f1da2044f49cdd","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-11-01T23:28:37.108Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-11-01T23:28:37.109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd switched to configuration voters=(15272227643520752861)"}
{"level":"info","ts":"2022-11-01T23:28:37.110Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-11-01T23:28:37.110Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","added-peer-id":"d3f1da2044f49cdd","added-peer-peer-urls":["https://192.168.39.211:2380"]}
{"level":"info","ts":"2022-11-01T23:28:37.114Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","cluster-version":"3.5"}
{"level":"info","ts":"2022-11-01T23:28:37.114Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-11-01T23:28:37.115Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d3f1da2044f49cdd","initial-advertise-peer-urls":["https://192.168.39.211:2380"],"listen-peer-urls":["https://192.168.39.211:2380"],"advertise-client-urls":["https://192.168.39.211:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.211:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-11-01T23:28:37.115Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-11-01T23:28:37.110Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.211:2380"}
{"level":"info","ts":"2022-11-01T23:28:37.120Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.211:2380"}
{"level":"info","ts":"2022-11-01T23:28:38.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd is starting a new election at term 2"}
{"level":"info","ts":"2022-11-01T23:28:38.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became pre-candidate at term 2"}
{"level":"info","ts":"2022-11-01T23:28:38.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd received MsgPreVoteResp from d3f1da2044f49cdd at term 2"}
{"level":"info","ts":"2022-11-01T23:28:38.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became candidate at term 3"}
{"level":"info","ts":"2022-11-01T23:28:38.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd received MsgVoteResp from d3f1da2044f49cdd at term 3"}
{"level":"info","ts":"2022-11-01T23:28:38.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became leader at term 3"}
{"level":"info","ts":"2022-11-01T23:28:38.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3f1da2044f49cdd elected leader d3f1da2044f49cdd at term 3"}
{"level":"info","ts":"2022-11-01T23:28:38.160Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d3f1da2044f49cdd","local-member-attributes":"{Name:test-preload-232607 ClientURLs:[https://192.168.39.211:2379]}","request-path":"/0/members/d3f1da2044f49cdd/attributes","cluster-id":"a3f4522b5c780b58","publish-timeout":"7s"}
{"level":"info","ts":"2022-11-01T23:28:38.160Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-11-01T23:28:38.162Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.211:2379"}
{"level":"info","ts":"2022-11-01T23:28:38.163Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-11-01T23:28:38.164Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-11-01T23:28:38.164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-11-01T23:28:38.164Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 23:28:59 up 2 min, 0 users, load average: 1.91, 0.71, 0.26
Linux test-preload-232607 5.10.57 #1 SMP Fri Oct 28 21:02:11 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [9f04eb062c6f28d5c259bc0959d90b6e47b32cf2bb9a849177c56db91a959c93] <==
* I1101 23:28:40.665451 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1101 23:28:40.665532 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1101 23:28:40.665730 1 crd_finalizer.go:266] Starting CRDFinalizer
I1101 23:28:40.670441 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1101 23:28:40.670527 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I1101 23:28:40.725404 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
E1101 23:28:40.742222 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1101 23:28:40.771058 1 shared_informer.go:262] Caches are synced for crd-autoregister
I1101 23:28:40.807934 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1101 23:28:40.816532 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1101 23:28:40.818348 1 cache.go:39] Caches are synced for autoregister controller
I1101 23:28:40.818489 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1101 23:28:40.818780 1 shared_informer.go:262] Caches are synced for node_authorizer
I1101 23:28:40.818889 1 apf_controller.go:322] Running API Priority and Fairness config worker
I1101 23:28:41.264326 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1101 23:28:41.612082 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1101 23:28:42.529522 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I1101 23:28:42.537379 1 controller.go:611] quota admission added evaluator for: deployments.apps
I1101 23:28:42.570679 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I1101 23:28:42.592266 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1101 23:28:42.598897 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1101 23:28:45.502429 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I1101 23:28:53.229341 1 controller.go:611] quota admission added evaluator for: endpoints
I1101 23:28:53.245098 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I1101 23:28:53.309387 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [72080e459318bc31f41f462ccfcf653e481bd42b45bca6cdb5245af57ff4375f] <==
* I1101 23:28:53.276890 1 shared_informer.go:262] Caches are synced for node
I1101 23:28:53.277016 1 range_allocator.go:173] Starting range CIDR allocator
I1101 23:28:53.277028 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I1101 23:28:53.277037 1 shared_informer.go:262] Caches are synced for cidrallocator
I1101 23:28:53.277781 1 shared_informer.go:262] Caches are synced for endpoint_slice
I1101 23:28:53.283086 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I1101 23:28:53.285335 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I1101 23:28:53.289046 1 shared_informer.go:262] Caches are synced for stateful set
I1101 23:28:53.289220 1 shared_informer.go:262] Caches are synced for deployment
I1101 23:28:53.289665 1 shared_informer.go:262] Caches are synced for attach detach
I1101 23:28:53.293853 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I1101 23:28:53.411627 1 shared_informer.go:262] Caches are synced for taint
I1101 23:28:53.412240 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I1101 23:28:53.414842 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
W1101 23:28:53.415570 1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-232607. Assuming now as a timestamp.
I1101 23:28:53.416018 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I1101 23:28:53.415422 1 event.go:294] "Event occurred" object="test-preload-232607" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-232607 event: Registered Node test-preload-232607 in Controller"
I1101 23:28:53.438149 1 shared_informer.go:262] Caches are synced for disruption
I1101 23:28:53.438214 1 disruption.go:371] Sending events to api server.
I1101 23:28:53.449568 1 shared_informer.go:262] Caches are synced for resource quota
I1101 23:28:53.461145 1 shared_informer.go:262] Caches are synced for resource quota
I1101 23:28:53.906597 1 shared_informer.go:262] Caches are synced for garbage collector
I1101 23:28:53.929381 1 shared_informer.go:262] Caches are synced for garbage collector
I1101 23:28:53.929437 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1101 23:28:54.552192 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xs9h8"
*
* ==> kube-proxy [83b921239bb6fc0bd4204b708ac585eb98cfce224a7347a78f44be83ab644eff] <==
* I1101 23:28:55.188334 1 node.go:163] Successfully retrieved node IP: 192.168.39.211
I1101 23:28:55.188453 1 server_others.go:138] "Detected node IP" address="192.168.39.211"
I1101 23:28:55.188550 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1101 23:28:55.275291 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I1101 23:28:55.275357 1 server_others.go:206] "Using iptables Proxier"
I1101 23:28:55.275424 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1101 23:28:55.276470 1 server.go:661] "Version info" version="v1.24.6"
I1101 23:28:55.276608 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 23:28:55.278434 1 config.go:317] "Starting service config controller"
I1101 23:28:55.278839 1 shared_informer.go:255] Waiting for caches to sync for service config
I1101 23:28:55.279111 1 config.go:226] "Starting endpoint slice config controller"
I1101 23:28:55.279143 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1101 23:28:55.279450 1 config.go:444] "Starting node config controller"
I1101 23:28:55.279607 1 shared_informer.go:255] Waiting for caches to sync for node config
I1101 23:28:55.379232 1 shared_informer.go:262] Caches are synced for service config
I1101 23:28:55.380262 1 shared_informer.go:262] Caches are synced for node config
I1101 23:28:55.380361 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [f76cd797857dc35db419cac774970119de5017ec8d67a0975e86efdd7921862b] <==
* I1101 23:28:37.714286 1 serving.go:348] Generated self-signed cert in-memory
W1101 23:28:40.691431 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1101 23:28:40.691571 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1101 23:28:40.691712 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1101 23:28:40.691746 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1101 23:28:40.748274 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.6"
I1101 23:28:40.748320 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 23:28:40.750931 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1101 23:28:40.752311 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1101 23:28:40.752332 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1101 23:28:40.752354 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1101 23:28:40.852774 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Tue 2022-11-01 23:26:18 UTC, ends at Tue 2022-11-01 23:28:59 UTC. --
Nov 01 23:28:42 test-preload-232607 kubelet[3243]: I1101 23:28:42.782884 3243 scope.go:110] "RemoveContainer" containerID="28b497e71c1dfffefc215012e8216f990bef8b73d7ad17d54134f55683dc89b1"
Nov 01 23:28:44 test-preload-232607 kubelet[3243]: I1101 23:28:44.385023 3243 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=24bce14278fb707359b40e7a5cf5c8d5 path="/var/lib/kubelet/pods/24bce14278fb707359b40e7a5cf5c8d5/volumes"
Nov 01 23:28:51 test-preload-232607 kubelet[3243]: I1101 23:28:51.532191 3243 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.631454 3243 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01498287-7b3d-4b85-b685-2d3fd1cd8e37-kube-proxy\") pod \"01498287-7b3d-4b85-b685-2d3fd1cd8e37\" (UID: \"01498287-7b3d-4b85-b685-2d3fd1cd8e37\") "
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.631532 3243 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmbcj\" (UniqueName: \"kubernetes.io/projected/01498287-7b3d-4b85-b685-2d3fd1cd8e37-kube-api-access-cmbcj\") pod \"01498287-7b3d-4b85-b685-2d3fd1cd8e37\" (UID: \"01498287-7b3d-4b85-b685-2d3fd1cd8e37\") "
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.631555 3243 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01498287-7b3d-4b85-b685-2d3fd1cd8e37-xtables-lock\") pod \"01498287-7b3d-4b85-b685-2d3fd1cd8e37\" (UID: \"01498287-7b3d-4b85-b685-2d3fd1cd8e37\") "
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.631576 3243 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01498287-7b3d-4b85-b685-2d3fd1cd8e37-lib-modules\") pod \"01498287-7b3d-4b85-b685-2d3fd1cd8e37\" (UID: \"01498287-7b3d-4b85-b685-2d3fd1cd8e37\") "
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.631622 3243 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01498287-7b3d-4b85-b685-2d3fd1cd8e37-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01498287-7b3d-4b85-b685-2d3fd1cd8e37" (UID: "01498287-7b3d-4b85-b685-2d3fd1cd8e37"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: W1101 23:28:53.633039 3243 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/01498287-7b3d-4b85-b685-2d3fd1cd8e37/volumes/kubernetes.io~configmap/kube-proxy: clearQuota called, but quotas disabled
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.633419 3243 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01498287-7b3d-4b85-b685-2d3fd1cd8e37-kube-proxy" (OuterVolumeSpecName: "kube-proxy") pod "01498287-7b3d-4b85-b685-2d3fd1cd8e37" (UID: "01498287-7b3d-4b85-b685-2d3fd1cd8e37"). InnerVolumeSpecName "kube-proxy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.633471 3243 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01498287-7b3d-4b85-b685-2d3fd1cd8e37-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01498287-7b3d-4b85-b685-2d3fd1cd8e37" (UID: "01498287-7b3d-4b85-b685-2d3fd1cd8e37"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.637403 3243 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01498287-7b3d-4b85-b685-2d3fd1cd8e37-kube-api-access-cmbcj" (OuterVolumeSpecName: "kube-api-access-cmbcj") pod "01498287-7b3d-4b85-b685-2d3fd1cd8e37" (UID: "01498287-7b3d-4b85-b685-2d3fd1cd8e37"). InnerVolumeSpecName "kube-api-access-cmbcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.732691 3243 reconciler.go:384] "Volume detached for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01498287-7b3d-4b85-b685-2d3fd1cd8e37-kube-proxy\") on node \"test-preload-232607\" DevicePath \"\""
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.732783 3243 reconciler.go:384] "Volume detached for volume \"kube-api-access-cmbcj\" (UniqueName: \"kubernetes.io/projected/01498287-7b3d-4b85-b685-2d3fd1cd8e37-kube-api-access-cmbcj\") on node \"test-preload-232607\" DevicePath \"\""
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.732794 3243 reconciler.go:384] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01498287-7b3d-4b85-b685-2d3fd1cd8e37-xtables-lock\") on node \"test-preload-232607\" DevicePath \"\""
Nov 01 23:28:53 test-preload-232607 kubelet[3243]: I1101 23:28:53.732806 3243 reconciler.go:384] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01498287-7b3d-4b85-b685-2d3fd1cd8e37-lib-modules\") on node \"test-preload-232607\" DevicePath \"\""
Nov 01 23:28:54 test-preload-232607 kubelet[3243]: I1101 23:28:54.494433 3243 scope.go:110] "RemoveContainer" containerID="b97444734e30e1a0d7ce347a663a873247086e853aa33c00dcc38af0bb78b2e9"
Nov 01 23:28:54 test-preload-232607 kubelet[3243]: I1101 23:28:54.560059 3243 topology_manager.go:200] "Topology Admit Handler"
Nov 01 23:28:54 test-preload-232607 kubelet[3243]: E1101 23:28:54.560138 3243 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="01498287-7b3d-4b85-b685-2d3fd1cd8e37" containerName="kube-proxy"
Nov 01 23:28:54 test-preload-232607 kubelet[3243]: I1101 23:28:54.560171 3243 memory_manager.go:345] "RemoveStaleState removing state" podUID="01498287-7b3d-4b85-b685-2d3fd1cd8e37" containerName="kube-proxy"
Nov 01 23:28:54 test-preload-232607 kubelet[3243]: I1101 23:28:54.638501 3243 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1f36e00-964b-412d-98ed-819011296362-xtables-lock\") pod \"kube-proxy-xs9h8\" (UID: \"c1f36e00-964b-412d-98ed-819011296362\") " pod="kube-system/kube-proxy-xs9h8"
Nov 01 23:28:54 test-preload-232607 kubelet[3243]: I1101 23:28:54.638578 3243 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dbjz\" (UniqueName: \"kubernetes.io/projected/c1f36e00-964b-412d-98ed-819011296362-kube-api-access-7dbjz\") pod \"kube-proxy-xs9h8\" (UID: \"c1f36e00-964b-412d-98ed-819011296362\") " pod="kube-system/kube-proxy-xs9h8"
Nov 01 23:28:54 test-preload-232607 kubelet[3243]: I1101 23:28:54.638608 3243 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1f36e00-964b-412d-98ed-819011296362-kube-proxy\") pod \"kube-proxy-xs9h8\" (UID: \"c1f36e00-964b-412d-98ed-819011296362\") " pod="kube-system/kube-proxy-xs9h8"
Nov 01 23:28:54 test-preload-232607 kubelet[3243]: I1101 23:28:54.638626 3243 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1f36e00-964b-412d-98ed-819011296362-lib-modules\") pod \"kube-proxy-xs9h8\" (UID: \"c1f36e00-964b-412d-98ed-819011296362\") " pod="kube-system/kube-proxy-xs9h8"
Nov 01 23:28:56 test-preload-232607 kubelet[3243]: I1101 23:28:56.384160 3243 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=01498287-7b3d-4b85-b685-2d3fd1cd8e37 path="/var/lib/kubelet/pods/01498287-7b3d-4b85-b685-2d3fd1cd8e37/volumes"
*
* ==> storage-provisioner [17e1f8f97ddb9676a26389886be3fd26d87af8be4f3373206040df6187d39928] <==
* I1101 23:28:32.085893 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1101 23:28:32.090590 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> storage-provisioner [998f599f4c646f81d4a3b57b13ffbe68ac9c4381ecaf41de4d19234ff72c42ac] <==
* I1101 23:28:43.146700 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1101 23:28:43.157423 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1101 23:28:43.157782 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-232607 -n test-preload-232607
helpers_test.go:261: (dbg) Run: kubectl --context test-preload-232607 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context test-preload-232607 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context test-preload-232607 describe pod : exit status 1 (46.523261ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context test-preload-232607 describe pod : exit status 1
helpers_test.go:175: Cleaning up "test-preload-232607" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-232607
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-232607: (1.160602368s)
--- FAIL: TestPreload (173.71s)