=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-105443 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E0114 10:56:36.137069 13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105443 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (1m59.260415559s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-105443 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-105443 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.75707615s)
preload_test.go:67: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-105443 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.6
E0114 10:57:31.430526 13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
preload_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105443 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.6: (1m8.313546924s)
preload_test.go:76: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-105443 -- sudo crictl image ls
preload_test.go:81: Expected to find gcr.io/k8s-minikube/busybox in output of `docker images`, instead got
-- stdout --
IMAGE                                       TAG                    IMAGE ID            SIZE
docker.io/kindest/kindnetd                  v20220726-ed811e41     d921cee849482       25.8MB
gcr.io/k8s-minikube/storage-provisioner     v5                     6e38f40d628db       9.06MB
k8s.gcr.io/coredns/coredns                  v1.8.6                 a4ca41631cc7a       13.6MB
k8s.gcr.io/etcd                             3.5.3-0                aebe758cef4cd       102MB
k8s.gcr.io/kube-apiserver                   v1.24.6                860f263331c95       33.8MB
k8s.gcr.io/kube-controller-manager          v1.24.6                c6c20157a4233       31MB
k8s.gcr.io/kube-proxy                       v1.24.6                0bb39497ab33b       39.5MB
k8s.gcr.io/kube-scheduler                   v1.24.6                c786c777a4e1c       15.5MB
k8s.gcr.io/pause                            3.7                    221177c6082a8       311kB
-- /stdout --
panic.go:522: *** TestPreload FAILED at 2023-01-14 10:57:53.338544316 +0000 UTC m=+3108.203844445
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-105443 -n test-preload-105443
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-105443 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-105443 logs -n 25: (1.054372883s)
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| cp | multinode-103159 cp multinode-103159-m03:/home/docker/cp-test.txt | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
| | multinode-103159:/home/docker/cp-test_multinode-103159-m03_multinode-103159.txt | | | | | |
| ssh | multinode-103159 ssh -n | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
| | multinode-103159-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-103159 ssh -n multinode-103159 sudo cat | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
| | /home/docker/cp-test_multinode-103159-m03_multinode-103159.txt | | | | | |
| cp | multinode-103159 cp multinode-103159-m03:/home/docker/cp-test.txt | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
| | multinode-103159-m02:/home/docker/cp-test_multinode-103159-m03_multinode-103159-m02.txt | | | | | |
| ssh | multinode-103159 ssh -n | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
| | multinode-103159-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-103159 ssh -n multinode-103159-m02 sudo cat | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
| | /home/docker/cp-test_multinode-103159-m03_multinode-103159-m02.txt | | | | | |
| node | multinode-103159 node stop m03 | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
| node | multinode-103159 node start | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:37 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-103159 | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:37 UTC | |
| stop | -p multinode-103159 | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:37 UTC | 14 Jan 23 10:40 UTC |
| start | -p multinode-103159 | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:40 UTC | 14 Jan 23 10:46 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-103159 | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:46 UTC | |
| node | multinode-103159 node delete | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:46 UTC | 14 Jan 23 10:46 UTC |
| | m03 | | | | | |
| stop | multinode-103159 stop | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:46 UTC | 14 Jan 23 10:49 UTC |
| start | -p multinode-103159 | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:49 UTC | 14 Jan 23 10:53 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-103159 | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:53 UTC | |
| start | -p multinode-103159-m02 | multinode-103159-m02 | jenkins | v1.28.0 | 14 Jan 23 10:53 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-103159-m03 | multinode-103159-m03 | jenkins | v1.28.0 | 14 Jan 23 10:53 UTC | 14 Jan 23 10:54 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-103159 | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:54 UTC | |
| delete | -p multinode-103159-m03 | multinode-103159-m03 | jenkins | v1.28.0 | 14 Jan 23 10:54 UTC | 14 Jan 23 10:54 UTC |
| delete | -p multinode-103159 | multinode-103159 | jenkins | v1.28.0 | 14 Jan 23 10:54 UTC | 14 Jan 23 10:54 UTC |
| start | -p test-preload-105443 | test-preload-105443 | jenkins | v1.28.0 | 14 Jan 23 10:54 UTC | 14 Jan 23 10:56 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-105443 | test-preload-105443 | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| start | -p test-preload-105443 | test-preload-105443 | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:57 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.6 | | | | | |
| ssh | -p test-preload-105443 -- sudo | test-preload-105443 | jenkins | v1.28.0 | 14 Jan 23 10:57 UTC | 14 Jan 23 10:57 UTC |
| | crictl image ls | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/14 10:56:44
Running on machine: ubuntu-20-agent-11
Binary: Built with gc go1.19.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0114 10:56:44.841829 27483 out.go:296] Setting OutFile to fd 1 ...
I0114 10:56:44.841990 27483 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:56:44.842004 27483 out.go:309] Setting ErrFile to fd 2...
I0114 10:56:44.842011 27483 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:56:44.842150 27483 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-7076/.minikube/bin
I0114 10:56:44.842714 27483 out.go:303] Setting JSON to false
I0114 10:56:44.843584 27483 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5952,"bootTime":1673687853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0114 10:56:44.843643 27483 start.go:135] virtualization: kvm guest
I0114 10:56:44.845995 27483 out.go:177] * [test-preload-105443] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I0114 10:56:44.847505 27483 out.go:177] - MINIKUBE_LOCATION=15642
I0114 10:56:44.847467 27483 notify.go:220] Checking for updates...
I0114 10:56:44.849131 27483 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0114 10:56:44.850641 27483 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
I0114 10:56:44.852159 27483 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
I0114 10:56:44.853836 27483 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0114 10:56:44.855568 27483 config.go:180] Loaded profile config "test-preload-105443": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0114 10:56:44.855925 27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0114 10:56:44.855969 27483 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:56:44.871006 27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36411
I0114 10:56:44.871406 27483 main.go:134] libmachine: () Calling .GetVersion
I0114 10:56:44.871928 27483 main.go:134] libmachine: Using API Version 1
I0114 10:56:44.871948 27483 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:56:44.872276 27483 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:56:44.872460 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:56:44.874302 27483 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I0114 10:56:44.875951 27483 driver.go:365] Setting default libvirt URI to qemu:///system
I0114 10:56:44.876417 27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0114 10:56:44.876461 27483 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:56:44.891718 27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39941
I0114 10:56:44.892038 27483 main.go:134] libmachine: () Calling .GetVersion
I0114 10:56:44.892525 27483 main.go:134] libmachine: Using API Version 1
I0114 10:56:44.892546 27483 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:56:44.892855 27483 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:56:44.893034 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:56:44.928948 27483 out.go:177] * Using the kvm2 driver based on existing profile
I0114 10:56:44.930397 27483 start.go:294] selected driver: kvm2
I0114 10:56:44.930425 27483 start.go:838] validating driver "kvm2" against &{Name:test-preload-105443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-105443 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 10:56:44.930568 27483 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0114 10:56:44.931465 27483 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0114 10:56:44.931693 27483 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15642-7076/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0114 10:56:44.947187 27483 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.28.0
I0114 10:56:44.947505 27483 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0114 10:56:44.947531 27483 cni.go:95] Creating CNI manager for ""
I0114 10:56:44.947541 27483 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
I0114 10:56:44.947551 27483 start_flags.go:319] config:
{Name:test-preload-105443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-105443 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 10:56:44.947647 27483 iso.go:125] acquiring lock: {Name:mk2d30b3fe95e944ec3a455ef50a6daa83b559c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0114 10:56:44.949787 27483 out.go:177] * Starting control plane node test-preload-105443 in cluster test-preload-105443
I0114 10:56:44.951384 27483 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I0114 10:56:45.066501 27483 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I0114 10:56:45.066524 27483 cache.go:57] Caching tarball of preloaded images
I0114 10:56:45.066747 27483 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I0114 10:56:45.069122 27483 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I0114 10:56:45.070627 27483 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I0114 10:56:45.187669 27483 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I0114 10:57:02.624024 27483 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I0114 10:57:02.624110 27483 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I0114 10:57:03.493487 27483 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
I0114 10:57:03.493622 27483 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/config.json ...
I0114 10:57:03.493815 27483 cache.go:193] Successfully downloaded all kic artifacts
I0114 10:57:03.493843 27483 start.go:364] acquiring machines lock for test-preload-105443: {Name:mk0b2fd58874b04199a2e55d480667572854a1a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0114 10:57:03.493937 27483 start.go:368] acquired machines lock for "test-preload-105443" in 77.451µs
I0114 10:57:03.493953 27483 start.go:96] Skipping create...Using existing machine configuration
I0114 10:57:03.493958 27483 fix.go:55] fixHost starting:
I0114 10:57:03.494229 27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0114 10:57:03.494268 27483 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:57:03.509103 27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:34621
I0114 10:57:03.509503 27483 main.go:134] libmachine: () Calling .GetVersion
I0114 10:57:03.509956 27483 main.go:134] libmachine: Using API Version 1
I0114 10:57:03.509972 27483 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:57:03.510346 27483 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:57:03.510559 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:57:03.510711 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetState
I0114 10:57:03.512608 27483 fix.go:103] recreateIfNeeded on test-preload-105443: state=Running err=<nil>
W0114 10:57:03.512626 27483 fix.go:129] unexpected machine state, will restart: <nil>
I0114 10:57:03.515899 27483 out.go:177] * Updating the running kvm2 "test-preload-105443" VM ...
I0114 10:57:03.517259 27483 machine.go:88] provisioning docker machine ...
I0114 10:57:03.517287 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:57:03.517498 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetMachineName
I0114 10:57:03.517653 27483 buildroot.go:166] provisioning hostname "test-preload-105443"
I0114 10:57:03.517679 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetMachineName
I0114 10:57:03.517877 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
I0114 10:57:03.520528 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:03.520966 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:03.521003 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:03.521153 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
I0114 10:57:03.521324 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:03.521464 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:03.521597 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
I0114 10:57:03.521755 27483 main.go:134] libmachine: Using SSH client type: native
I0114 10:57:03.521903 27483 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 192.168.39.172 22 <nil> <nil>}
I0114 10:57:03.521917 27483 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-105443 && echo "test-preload-105443" | sudo tee /etc/hostname
I0114 10:57:03.657055 27483 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-105443
I0114 10:57:03.657083 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
I0114 10:57:03.659898 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:03.660230 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:03.660260 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:03.660430 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
I0114 10:57:03.660618 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:03.660766 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:03.660889 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
I0114 10:57:03.661034 27483 main.go:134] libmachine: Using SSH client type: native
I0114 10:57:03.661189 27483 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 192.168.39.172 22 <nil> <nil>}
I0114 10:57:03.661209 27483 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-105443' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-105443/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-105443' | sudo tee -a /etc/hosts;
fi
fi
I0114 10:57:03.779087 27483 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0114 10:57:03.779115 27483 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15642-7076/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-7076/.minikube}
I0114 10:57:03.779137 27483 buildroot.go:174] setting up certificates
I0114 10:57:03.779146 27483 provision.go:83] configureAuth start
I0114 10:57:03.779160 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetMachineName
I0114 10:57:03.779387 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetIP
I0114 10:57:03.781939 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:03.782288 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:03.782316 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:03.782430 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
I0114 10:57:03.784455 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:03.784750 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:03.784786 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:03.784881 27483 provision.go:138] copyHostCerts
I0114 10:57:03.784922 27483 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-7076/.minikube/ca.pem, removing ...
I0114 10:57:03.784932 27483 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-7076/.minikube/ca.pem
I0114 10:57:03.785006 27483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-7076/.minikube/ca.pem (1078 bytes)
I0114 10:57:03.785109 27483 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-7076/.minikube/cert.pem, removing ...
I0114 10:57:03.785120 27483 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-7076/.minikube/cert.pem
I0114 10:57:03.785147 27483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-7076/.minikube/cert.pem (1123 bytes)
I0114 10:57:03.785195 27483 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-7076/.minikube/key.pem, removing ...
I0114 10:57:03.785202 27483 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-7076/.minikube/key.pem
I0114 10:57:03.785224 27483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-7076/.minikube/key.pem (1679 bytes)
I0114 10:57:03.785270 27483 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-7076/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca-key.pem org=jenkins.test-preload-105443 san=[192.168.39.172 192.168.39.172 localhost 127.0.0.1 minikube test-preload-105443]
I0114 10:57:03.904735 27483 provision.go:172] copyRemoteCerts
I0114 10:57:03.904794 27483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0114 10:57:03.904814 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
I0114 10:57:03.907354 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:03.907664 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:03.907706 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:03.907872 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
I0114 10:57:03.908036 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:03.908221 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
I0114 10:57:03.908378 27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
I0114 10:57:03.996081 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0114 10:57:04.020384 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0114 10:57:04.042430 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0114 10:57:04.064424 27483 provision.go:86] duration metric: configureAuth took 285.2617ms
I0114 10:57:04.064452 27483 buildroot.go:189] setting minikube options for container-runtime
I0114 10:57:04.064606 27483 config.go:180] Loaded profile config "test-preload-105443": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I0114 10:57:04.064617 27483 machine.go:91] provisioned docker machine in 547.340706ms
I0114 10:57:04.064622 27483 start.go:300] post-start starting for "test-preload-105443" (driver="kvm2")
I0114 10:57:04.064628 27483 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0114 10:57:04.064653 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:57:04.064923 27483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0114 10:57:04.064952 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
I0114 10:57:04.067356 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:04.067669 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:04.067705 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:04.067874 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
I0114 10:57:04.068068 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:04.068195 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
I0114 10:57:04.068353 27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
I0114 10:57:04.155889 27483 ssh_runner.go:195] Run: cat /etc/os-release
I0114 10:57:04.160000 27483 info.go:137] Remote host: Buildroot 2021.02.12
I0114 10:57:04.160026 27483 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-7076/.minikube/addons for local assets ...
I0114 10:57:04.160107 27483 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-7076/.minikube/files for local assets ...
I0114 10:57:04.160194 27483 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/139212.pem -> 139212.pem in /etc/ssl/certs
I0114 10:57:04.160304 27483 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0114 10:57:04.169302 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/139212.pem --> /etc/ssl/certs/139212.pem (1708 bytes)
I0114 10:57:04.191837 27483 start.go:303] post-start completed in 127.20128ms
I0114 10:57:04.191871 27483 fix.go:57] fixHost completed within 697.911934ms
I0114 10:57:04.191896 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
I0114 10:57:04.194381 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:04.194670 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:04.194703 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:04.194903 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
I0114 10:57:04.195079 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:04.195212 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:04.195378 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
I0114 10:57:04.195505 27483 main.go:134] libmachine: Using SSH client type: native
I0114 10:57:04.195622 27483 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 192.168.39.172 22 <nil> <nil>}
I0114 10:57:04.195632 27483 main.go:134] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0114 10:57:04.314929 27483 main.go:134] libmachine: SSH cmd err, output: <nil>: 1673693824.311811677
I0114 10:57:04.314953 27483 fix.go:207] guest clock: 1673693824.311811677
I0114 10:57:04.314960 27483 fix.go:220] Guest: 2023-01-14 10:57:04.311811677 +0000 UTC Remote: 2023-01-14 10:57:04.191876949 +0000 UTC m=+19.411693954 (delta=119.934728ms)
I0114 10:57:04.314981 27483 fix.go:191] guest clock delta is within tolerance: 119.934728ms
I0114 10:57:04.314987 27483 start.go:83] releasing machines lock for "test-preload-105443", held for 821.037649ms
I0114 10:57:04.315032 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:57:04.315315 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetIP
I0114 10:57:04.317727 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:04.318095 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:04.318138 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:04.318274 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:57:04.318776 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:57:04.318952 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:57:04.319018 27483 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0114 10:57:04.319066 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
I0114 10:57:04.319167 27483 ssh_runner.go:195] Run: cat /version.json
I0114 10:57:04.319188 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
I0114 10:57:04.321686 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:04.321717 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:04.321990 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:04.322028 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:04.322048 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:04.322101 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:04.322310 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
I0114 10:57:04.322402 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
I0114 10:57:04.322502 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:04.322555 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:04.322615 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
I0114 10:57:04.322706 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
I0114 10:57:04.322727 27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
I0114 10:57:04.322814 27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
I0114 10:57:04.417331 27483 ssh_runner.go:195] Run: systemctl --version
I0114 10:57:04.424326 27483 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I0114 10:57:04.424440 27483 ssh_runner.go:195] Run: sudo crictl images --output json
I0114 10:57:04.454669 27483 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
I0114 10:57:04.454724 27483 ssh_runner.go:195] Run: which lz4
I0114 10:57:04.459037 27483 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0114 10:57:04.463259 27483 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0114 10:57:04.463289 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I0114 10:57:06.662983 27483 containerd.go:496] Took 2.203974 seconds to copy over tarball
I0114 10:57:06.663050 27483 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0114 10:57:10.021006 27483 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.35792013s)
I0114 10:57:10.021040 27483 containerd.go:503] Took 3.358030 seconds to extract the tarball
I0114 10:57:10.021054 27483 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0114 10:57:10.063775 27483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0114 10:57:10.198644 27483 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0114 10:57:10.235539 27483 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0114 10:57:10.253591 27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0114 10:57:10.266454 27483 docker.go:189] disabling docker service ...
I0114 10:57:10.266504 27483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0114 10:57:10.282083 27483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0114 10:57:10.297881 27483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0114 10:57:10.440617 27483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0114 10:57:10.602422 27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0114 10:57:10.618291 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0114 10:57:10.636619 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I0114 10:57:10.648420 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I0114 10:57:10.659142 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I0114 10:57:10.669259 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
I0114 10:57:10.679887 27483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0114 10:57:10.689402 27483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0114 10:57:10.699119 27483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0114 10:57:10.833459 27483 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0114 10:57:11.085684 27483 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I0114 10:57:11.085757 27483 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0114 10:57:11.109498 27483 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0114 10:57:12.215242 27483 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0114 10:57:12.220461 27483 retry.go:31] will retry after 2.160763633s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0114 10:57:14.382209 27483 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0114 10:57:14.387459 27483 start.go:472] Will wait 60s for crictl version
I0114 10:57:14.387510 27483 ssh_runner.go:195] Run: which crictl
I0114 10:57:14.391151 27483 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0114 10:57:14.420438 27483 start.go:488] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.9
RuntimeApiVersion: v1alpha2
I0114 10:57:14.420496 27483 ssh_runner.go:195] Run: containerd --version
I0114 10:57:14.452838 27483 ssh_runner.go:195] Run: containerd --version
I0114 10:57:14.483693 27483 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
I0114 10:57:14.485043 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetIP
I0114 10:57:14.487862 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:14.488196 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:14.488228 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:14.488412 27483 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0114 10:57:14.492727 27483 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I0114 10:57:14.492793 27483 ssh_runner.go:195] Run: sudo crictl images --output json
I0114 10:57:14.521168 27483 containerd.go:553] all images are preloaded for containerd runtime.
I0114 10:57:14.521193 27483 containerd.go:467] Images already preloaded, skipping extraction
I0114 10:57:14.521240 27483 ssh_runner.go:195] Run: sudo crictl images --output json
I0114 10:57:14.550424 27483 containerd.go:553] all images are preloaded for containerd runtime.
I0114 10:57:14.550449 27483 cache_images.go:84] Images are preloaded, skipping loading
I0114 10:57:14.550501 27483 ssh_runner.go:195] Run: sudo crictl info
I0114 10:57:14.604746 27483 cni.go:95] Creating CNI manager for ""
I0114 10:57:14.604769 27483 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
I0114 10:57:14.604779 27483 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0114 10:57:14.604798 27483 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-105443 NodeName:test-preload-105443 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
I0114 10:57:14.604946 27483 kubeadm.go:163] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.172
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-105443"
kubeletExtraArgs:
node-ip: 192.168.39.172
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0114 10:57:14.605047 27483 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-105443 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-105443 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0114 10:57:14.605108 27483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I0114 10:57:14.617185 27483 binaries.go:44] Found k8s binaries, skipping transfer
I0114 10:57:14.617251 27483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0114 10:57:14.628477 27483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (514 bytes)
I0114 10:57:14.650332 27483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0114 10:57:14.676514 27483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
I0114 10:57:14.705028 27483 ssh_runner.go:195] Run: grep 192.168.39.172 control-plane.minikube.internal$ /etc/hosts
I0114 10:57:14.717550 27483 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443 for IP: 192.168.39.172
I0114 10:57:14.717670 27483 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-7076/.minikube/ca.key
I0114 10:57:14.717722 27483 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-7076/.minikube/proxy-client-ca.key
I0114 10:57:14.717812 27483 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.key
I0114 10:57:14.717902 27483 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/apiserver.key.ee96354a
I0114 10:57:14.717961 27483 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/proxy-client.key
I0114 10:57:14.718097 27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/13921.pem (1338 bytes)
W0114 10:57:14.718130 27483 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/13921_empty.pem, impossibly tiny 0 bytes
I0114 10:57:14.718143 27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca-key.pem (1675 bytes)
I0114 10:57:14.718177 27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem (1078 bytes)
I0114 10:57:14.718210 27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/cert.pem (1123 bytes)
I0114 10:57:14.718236 27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/key.pem (1679 bytes)
I0114 10:57:14.718287 27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/139212.pem (1708 bytes)
I0114 10:57:14.718980 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0114 10:57:14.772325 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0114 10:57:14.805451 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0114 10:57:14.836856 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0114 10:57:14.870023 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0114 10:57:14.923667 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0114 10:57:14.954579 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0114 10:57:14.981542 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0114 10:57:15.019906 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/139212.pem --> /usr/share/ca-certificates/139212.pem (1708 bytes)
I0114 10:57:15.045803 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0114 10:57:15.082706 27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/certs/13921.pem --> /usr/share/ca-certificates/13921.pem (1338 bytes)
I0114 10:57:15.128950 27483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0114 10:57:15.169006 27483 ssh_runner.go:195] Run: openssl version
I0114 10:57:15.175581 27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139212.pem && ln -fs /usr/share/ca-certificates/139212.pem /etc/ssl/certs/139212.pem"
I0114 10:57:15.189393 27483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139212.pem
I0114 10:57:15.208365 27483 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:21 /usr/share/ca-certificates/139212.pem
I0114 10:57:15.208434 27483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139212.pem
I0114 10:57:15.216455 27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139212.pem /etc/ssl/certs/3ec20f2e.0"
I0114 10:57:15.227070 27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0114 10:57:15.258830 27483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0114 10:57:15.270595 27483 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
I0114 10:57:15.270650 27483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0114 10:57:15.279993 27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0114 10:57:15.289388 27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13921.pem && ln -fs /usr/share/ca-certificates/13921.pem /etc/ssl/certs/13921.pem"
I0114 10:57:15.300388 27483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13921.pem
I0114 10:57:15.305102 27483 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:21 /usr/share/ca-certificates/13921.pem
I0114 10:57:15.305147 27483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13921.pem
I0114 10:57:15.319407 27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13921.pem /etc/ssl/certs/51391683.0"
I0114 10:57:15.344944 27483 kubeadm.go:396] StartCluster: {Name:test-preload-105443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-105443 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 10:57:15.345031 27483 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0114 10:57:15.345068 27483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0114 10:57:15.396831 27483 cri.go:87] found id: "93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29"
I0114 10:57:15.396859 27483 cri.go:87] found id: ""
I0114 10:57:15.396895 27483 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0114 10:57:15.447928 27483 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0","pid":2921,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0/rootfs","created":"2023-01-14T10:57:14.735337535Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-105443_72c33a3ad2d2e5f9b0a0ed2b8f209e20","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-105443","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a","pid":2779,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a/rootfs","created":"2023-01-14T10:57:13.723267075Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-105443_84d1f443092d7d6e8972fbfd258f9adb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-105443","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be","pid":2786,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be/rootfs","created":"2023-01-14T10:57:14.064706786Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-llwpq_91739d92-c705-413a-9c93-bd3ff50a4bde","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-llwpq","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29","pid":3010,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29/rootfs","created":"2023-01-14T10:57:15.435196751Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-105443","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289","pid":2765,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289/rootfs","created":"2023-01-14T10:57:13.717648473Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-105443_8957cb515cac201172c0da126ed92840","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-105443","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3","pid":2906,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3/rootfs","created":"2023-01-14T10:57:14.698581531Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-105443_bf9ef742a4e80f823bde6bfa4ea6ea87","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-105443","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
I0114 10:57:15.448081 27483 cri.go:124] list returned 6 containers
I0114 10:57:15.448095 27483 cri.go:127] container: {ID:114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0 Status:running}
I0114 10:57:15.448112 27483 cri.go:129] skipping 114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0 - not in ps
I0114 10:57:15.448120 27483 cri.go:127] container: {ID:1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a Status:running}
I0114 10:57:15.448130 27483 cri.go:129] skipping 1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a - not in ps
I0114 10:57:15.448140 27483 cri.go:127] container: {ID:70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be Status:running}
I0114 10:57:15.448150 27483 cri.go:129] skipping 70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be - not in ps
I0114 10:57:15.448160 27483 cri.go:127] container: {ID:93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29 Status:created}
I0114 10:57:15.448169 27483 cri.go:133] skipping {93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29 created}: state = "created", want "paused"
I0114 10:57:15.448184 27483 cri.go:127] container: {ID:bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289 Status:running}
I0114 10:57:15.448193 27483 cri.go:129] skipping bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289 - not in ps
I0114 10:57:15.448200 27483 cri.go:127] container: {ID:fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3 Status:running}
I0114 10:57:15.448210 27483 cri.go:129] skipping fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3 - not in ps
I0114 10:57:15.448255 27483 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0114 10:57:15.462725 27483 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I0114 10:57:15.462762 27483 kubeadm.go:627] restartCluster start
I0114 10:57:15.462811 27483 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0114 10:57:15.474718 27483 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0114 10:57:15.475372 27483 kubeconfig.go:92] found "test-preload-105443" server: "https://192.168.39.172:8443"
I0114 10:57:15.476281 27483 kapi.go:59] client config for test-preload-105443: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.key", CAFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 10:57:15.476988 27483 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0114 10:57:15.486392 27483 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -38,7 +38,7 @@
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
-- /stdout --
I0114 10:57:15.486417 27483 kubeadm.go:1114] stopping kube-system containers ...
I0114 10:57:15.486430 27483 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0114 10:57:15.486486 27483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0114 10:57:15.524810 27483 cri.go:87] found id: "93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29"
I0114 10:57:15.524847 27483 cri.go:87] found id: ""
I0114 10:57:15.524854 27483 cri.go:232] Stopping containers: [93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29]
I0114 10:57:15.524897 27483 ssh_runner.go:195] Run: which crictl
I0114 10:57:15.529370 27483 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29
I0114 10:57:15.569312 27483 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0114 10:57:15.615525 27483 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0114 10:57:15.627591 27483 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Jan 14 10:55 /etc/kubernetes/admin.conf
-rw------- 1 root root 5658 Jan 14 10:55 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Jan 14 10:55 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5602 Jan 14 10:55 /etc/kubernetes/scheduler.conf
I0114 10:57:15.627641 27483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0114 10:57:15.636337 27483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0114 10:57:15.644489 27483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0114 10:57:15.652442 27483 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0114 10:57:15.652495 27483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0114 10:57:15.660754 27483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0114 10:57:15.668520 27483 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0114 10:57:15.668569 27483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0114 10:57:15.676769 27483 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0114 10:57:15.685489 27483 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0114 10:57:15.685513 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0114 10:57:15.821101 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0114 10:57:16.561318 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0114 10:57:16.912818 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0114 10:57:16.985058 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0114 10:57:17.056030 27483 api_server.go:51] waiting for apiserver process to appear ...
I0114 10:57:17.056107 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:17.572676 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:18.072130 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:18.572844 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:19.072025 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:19.572115 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:20.072473 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:20.572690 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:21.072787 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:21.572651 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:22.072387 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:22.572167 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:23.071994 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:23.572128 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:24.072921 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:24.572938 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:24.598617 27483 api_server.go:71] duration metric: took 7.542591348s to wait for apiserver process to appear ...
I0114 10:57:24.598638 27483 api_server.go:87] waiting for apiserver healthz status ...
I0114 10:57:24.598647 27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
I0114 10:57:24.599178 27483 api_server.go:268] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
I0114 10:57:25.100112 27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
I0114 10:57:30.100745 27483 api_server.go:268] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0114 10:57:30.599398 27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
I0114 10:57:34.404841 27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0114 10:57:34.404872 27483 api_server.go:102] status: https://192.168.39.172:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0114 10:57:34.600258 27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
I0114 10:57:34.614272 27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0114 10:57:34.614309 27483 api_server.go:102] status: https://192.168.39.172:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0114 10:57:35.100171 27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
I0114 10:57:35.116101 27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0114 10:57:35.116137 27483 api_server.go:102] status: https://192.168.39.172:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0114 10:57:35.600093 27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
I0114 10:57:35.610768 27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0114 10:57:35.610795 27483 api_server.go:102] status: https://192.168.39.172:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0114 10:57:36.099343 27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
I0114 10:57:36.106733 27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 200:
ok
I0114 10:57:36.113306 27483 api_server.go:140] control plane version: v1.24.6
I0114 10:57:36.113325 27483 api_server.go:130] duration metric: took 11.514682329s to wait for apiserver health ...
I0114 10:57:36.113332 27483 cni.go:95] Creating CNI manager for ""
I0114 10:57:36.113338 27483 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
I0114 10:57:36.115499 27483 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0114 10:57:36.117173 27483 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0114 10:57:36.127419 27483 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0114 10:57:36.144759 27483 system_pods.go:43] waiting for kube-system pods to appear ...
I0114 10:57:36.153830 27483 system_pods.go:59] 7 kube-system pods found
I0114 10:57:36.153858 27483 system_pods.go:61] "coredns-6d4b75cb6d-qrnsv" [d6b36277-faa5-4a95-8152-7c3bee0e7d0e] Running
I0114 10:57:36.153863 27483 system_pods.go:61] "etcd-test-preload-105443" [c83b44f0-7ce9-4416-bd67-f187352b1165] Running
I0114 10:57:36.153868 27483 system_pods.go:61] "kube-apiserver-test-preload-105443" [aad1462d-1f15-40a5-ac94-e61bf60ad44f] Pending
I0114 10:57:36.153876 27483 system_pods.go:61] "kube-controller-manager-test-preload-105443" [d9cc4f73-5345-45fa-9330-2ddafad96428] Pending
I0114 10:57:36.153880 27483 system_pods.go:61] "kube-proxy-llwpq" [91739d92-c705-413a-9c93-bd3ff50a4bde] Running
I0114 10:57:36.153884 27483 system_pods.go:61] "kube-scheduler-test-preload-105443" [86084f99-09ca-4e55-a94b-8d8fbf172cfd] Pending
I0114 10:57:36.153888 27483 system_pods.go:61] "storage-provisioner" [6605fd74-8f22-4580-a14b-c949d30b4406] Running
I0114 10:57:36.153892 27483 system_pods.go:74] duration metric: took 9.117201ms to wait for pod list to return data ...
I0114 10:57:36.153902 27483 node_conditions.go:102] verifying NodePressure condition ...
I0114 10:57:36.161282 27483 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0114 10:57:36.161314 27483 node_conditions.go:123] node cpu capacity is 2
I0114 10:57:36.161327 27483 node_conditions.go:105] duration metric: took 7.420477ms to run NodePressure ...
I0114 10:57:36.161346 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0114 10:57:36.438345 27483 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I0114 10:57:36.442581 27483 kubeadm.go:778] kubelet initialised
I0114 10:57:36.442603 27483 kubeadm.go:779] duration metric: took 4.234305ms waiting for restarted kubelet to initialise ...
I0114 10:57:36.442609 27483 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 10:57:36.449387 27483 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace to be "Ready" ...
I0114 10:57:36.461391 27483 pod_ready.go:92] pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:36.461405 27483 pod_ready.go:81] duration metric: took 11.998919ms waiting for pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace to be "Ready" ...
I0114 10:57:36.461412 27483 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:36.466860 27483 pod_ready.go:92] pod "etcd-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:36.466880 27483 pod_ready.go:81] duration metric: took 5.461777ms waiting for pod "etcd-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:36.466890 27483 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:38.488273 27483 pod_ready.go:102] pod "kube-apiserver-test-preload-105443" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
I0114 10:57:40.984696 27483 pod_ready.go:92] pod "kube-apiserver-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:40.984735 27483 pod_ready.go:81] duration metric: took 4.517835335s waiting for pod "kube-apiserver-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:40.984752 27483 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:42.997298 27483 pod_ready.go:102] pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace has status "Ready":"False"
I0114 10:57:44.498318 27483 pod_ready.go:92] pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:44.498351 27483 pod_ready.go:81] duration metric: took 3.513585128s waiting for pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:44.498363 27483 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-llwpq" in "kube-system" namespace to be "Ready" ...
I0114 10:57:46.512881 27483 pod_ready.go:102] pod "kube-proxy-llwpq" in "kube-system" namespace has status "Ready":"False"
I0114 10:57:47.509080 27483 pod_ready.go:97] error getting pod "kube-proxy-llwpq" in "kube-system" namespace (skipping!): pods "kube-proxy-llwpq" not found
I0114 10:57:47.509126 27483 pod_ready.go:81] duration metric: took 3.010744638s waiting for pod "kube-proxy-llwpq" in "kube-system" namespace to be "Ready" ...
E0114 10:57:47.509138 27483 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-llwpq" in "kube-system" namespace (skipping!): pods "kube-proxy-llwpq" not found
I0114 10:57:47.509146 27483 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:49.526235 27483 pod_ready.go:102] pod "kube-scheduler-test-preload-105443" in "kube-system" namespace has status "Ready":"False"
I0114 10:57:51.028129 27483 pod_ready.go:92] pod "kube-scheduler-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:51.028163 27483 pod_ready.go:81] duration metric: took 3.519009848s waiting for pod "kube-scheduler-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.028175 27483 pod_ready.go:38] duration metric: took 14.585558728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 10:57:51.028193 27483 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0114 10:57:51.039230 27483 ops.go:34] apiserver oom_adj: -16
I0114 10:57:51.039250 27483 kubeadm.go:631] restartCluster took 35.576481485s
I0114 10:57:51.039256 27483 kubeadm.go:398] StartCluster complete in 35.69431939s
I0114 10:57:51.039291 27483 settings.go:142] acquiring lock: {Name:mk3038dd5af57eb60f91199b2b839c5d07056ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:57:51.039394 27483 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15642-7076/kubeconfig
I0114 10:57:51.040222 27483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-7076/kubeconfig: {Name:mk46c671e06b6e8f61c0cf0252effe586db914b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:57:51.041069 27483 kapi.go:59] client config for test-preload-105443: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.key", CAFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 10:57:51.044149 27483 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-105443" rescaled to 1
I0114 10:57:51.044196 27483 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0114 10:57:51.046364 27483 out.go:177] * Verifying Kubernetes components...
I0114 10:57:51.044247 27483 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0114 10:57:51.044265 27483 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
I0114 10:57:51.044476 27483 config.go:180] Loaded profile config "test-preload-105443": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I0114 10:57:51.047810 27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0114 10:57:51.047821 27483 addons.go:65] Setting storage-provisioner=true in profile "test-preload-105443"
I0114 10:57:51.047855 27483 addons.go:227] Setting addon storage-provisioner=true in "test-preload-105443"
W0114 10:57:51.047869 27483 addons.go:236] addon storage-provisioner should already be in state true
I0114 10:57:51.047829 27483 addons.go:65] Setting default-storageclass=true in profile "test-preload-105443"
I0114 10:57:51.047925 27483 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-105443"
I0114 10:57:51.047936 27483 host.go:66] Checking if "test-preload-105443" exists ...
I0114 10:57:51.048287 27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0114 10:57:51.048327 27483 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:57:51.048353 27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0114 10:57:51.048390 27483 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:57:51.062985 27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:38759
I0114 10:57:51.063084 27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43593
I0114 10:57:51.063424 27483 main.go:134] libmachine: () Calling .GetVersion
I0114 10:57:51.063668 27483 main.go:134] libmachine: () Calling .GetVersion
I0114 10:57:51.063963 27483 main.go:134] libmachine: Using API Version 1
I0114 10:57:51.063993 27483 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:57:51.064133 27483 main.go:134] libmachine: Using API Version 1
I0114 10:57:51.064155 27483 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:57:51.064349 27483 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:57:51.064451 27483 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:57:51.064540 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetState
I0114 10:57:51.064867 27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0114 10:57:51.064912 27483 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:57:51.066870 27483 kapi.go:59] client config for test-preload-105443: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.key", CAFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 10:57:51.080179 27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45167
I0114 10:57:51.080584 27483 main.go:134] libmachine: () Calling .GetVersion
I0114 10:57:51.080996 27483 main.go:134] libmachine: Using API Version 1
I0114 10:57:51.081020 27483 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:57:51.081340 27483 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:57:51.081528 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetState
I0114 10:57:51.083135 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:57:51.085165 27483 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0114 10:57:51.083919 27483 addons.go:227] Setting addon default-storageclass=true in "test-preload-105443"
W0114 10:57:51.086598 27483 addons.go:236] addon default-storageclass should already be in state true
I0114 10:57:51.086639 27483 host.go:66] Checking if "test-preload-105443" exists ...
I0114 10:57:51.086709 27483 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0114 10:57:51.086728 27483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0114 10:57:51.086747 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
I0114 10:57:51.086971 27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0114 10:57:51.087005 27483 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:57:51.089937 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:51.090467 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:51.090499 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:51.090659 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
I0114 10:57:51.090832 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:51.090971 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
I0114 10:57:51.091131 27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
I0114 10:57:51.104203 27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42793
I0114 10:57:51.104594 27483 main.go:134] libmachine: () Calling .GetVersion
I0114 10:57:51.105021 27483 main.go:134] libmachine: Using API Version 1
I0114 10:57:51.105049 27483 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:57:51.105329 27483 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:57:51.105813 27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0114 10:57:51.105849 27483 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:57:51.120471 27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39745
I0114 10:57:51.120848 27483 main.go:134] libmachine: () Calling .GetVersion
I0114 10:57:51.121289 27483 main.go:134] libmachine: Using API Version 1
I0114 10:57:51.121313 27483 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:57:51.121628 27483 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:57:51.121799 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetState
I0114 10:57:51.123271 27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
I0114 10:57:51.123503 27483 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0114 10:57:51.123520 27483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0114 10:57:51.123535 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
I0114 10:57:51.126237 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:51.126652 27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
I0114 10:57:51.126684 27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
I0114 10:57:51.126854 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
I0114 10:57:51.127025 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
I0114 10:57:51.127183 27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
I0114 10:57:51.127352 27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
I0114 10:57:51.229392 27483 node_ready.go:35] waiting up to 6m0s for node "test-preload-105443" to be "Ready" ...
I0114 10:57:51.229635 27483 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0114 10:57:51.232142 27483 node_ready.go:49] node "test-preload-105443" has status "Ready":"True"
I0114 10:57:51.232161 27483 node_ready.go:38] duration metric: took 2.729825ms waiting for node "test-preload-105443" to be "Ready" ...
I0114 10:57:51.232168 27483 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 10:57:51.237594 27483 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.243277 27483 pod_ready.go:92] pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:51.243290 27483 pod_ready.go:81] duration metric: took 5.677803ms waiting for pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.243298 27483 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.247704 27483 pod_ready.go:92] pod "etcd-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:51.247730 27483 pod_ready.go:81] duration metric: took 4.416982ms waiting for pod "etcd-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.247741 27483 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.252735 27483 pod_ready.go:92] pod "kube-apiserver-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:51.252765 27483 pod_ready.go:81] duration metric: took 5.004597ms waiting for pod "kube-apiserver-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.252776 27483 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.263259 27483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0114 10:57:51.278809 27483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0114 10:57:51.424521 27483 pod_ready.go:92] pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:51.424541 27483 pod_ready.go:81] duration metric: took 171.759236ms waiting for pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.424553 27483 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r2zx5" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.823589 27483 pod_ready.go:92] pod "kube-proxy-r2zx5" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:51.823614 27483 pod_ready.go:81] duration metric: took 399.0545ms waiting for pod "kube-proxy-r2zx5" in "kube-system" namespace to be "Ready" ...
I0114 10:57:51.823627 27483 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:52.149652 27483 main.go:134] libmachine: Making call to close driver server
I0114 10:57:52.149680 27483 main.go:134] libmachine: (test-preload-105443) Calling .Close
I0114 10:57:52.149769 27483 main.go:134] libmachine: Making call to close driver server
I0114 10:57:52.149811 27483 main.go:134] libmachine: (test-preload-105443) Calling .Close
I0114 10:57:52.149960 27483 main.go:134] libmachine: Successfully made call to close driver server
I0114 10:57:52.149977 27483 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 10:57:52.149987 27483 main.go:134] libmachine: Making call to close driver server
I0114 10:57:52.149996 27483 main.go:134] libmachine: (test-preload-105443) Calling .Close
I0114 10:57:52.150112 27483 main.go:134] libmachine: (test-preload-105443) DBG | Closing plugin on server side
I0114 10:57:52.150123 27483 main.go:134] libmachine: Successfully made call to close driver server
I0114 10:57:52.150140 27483 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 10:57:52.150161 27483 main.go:134] libmachine: Making call to close driver server
I0114 10:57:52.150175 27483 main.go:134] libmachine: (test-preload-105443) Calling .Close
I0114 10:57:52.150237 27483 main.go:134] libmachine: Successfully made call to close driver server
I0114 10:57:52.150249 27483 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 10:57:52.150254 27483 main.go:134] libmachine: (test-preload-105443) DBG | Closing plugin on server side
I0114 10:57:52.150420 27483 main.go:134] libmachine: Successfully made call to close driver server
I0114 10:57:52.150450 27483 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 10:57:52.150458 27483 main.go:134] libmachine: (test-preload-105443) DBG | Closing plugin on server side
I0114 10:57:52.150464 27483 main.go:134] libmachine: Making call to close driver server
I0114 10:57:52.150480 27483 main.go:134] libmachine: (test-preload-105443) Calling .Close
I0114 10:57:52.150678 27483 main.go:134] libmachine: Successfully made call to close driver server
I0114 10:57:52.150741 27483 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 10:57:52.154203 27483 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0114 10:57:52.155779 27483 addons.go:488] enableAddons completed in 1.111516995s
I0114 10:57:52.223817 27483 pod_ready.go:92] pod "kube-scheduler-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
I0114 10:57:52.223845 27483 pod_ready.go:81] duration metric: took 400.209326ms waiting for pod "kube-scheduler-test-preload-105443" in "kube-system" namespace to be "Ready" ...
I0114 10:57:52.223859 27483 pod_ready.go:38] duration metric: took 991.680111ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 10:57:52.223926 27483 api_server.go:51] waiting for apiserver process to appear ...
I0114 10:57:52.223983 27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:57:52.239184 27483 api_server.go:71] duration metric: took 1.194962526s to wait for apiserver process to appear ...
I0114 10:57:52.239212 27483 api_server.go:87] waiting for apiserver healthz status ...
I0114 10:57:52.239224 27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
I0114 10:57:52.244732 27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 200:
ok
I0114 10:57:52.245772 27483 api_server.go:140] control plane version: v1.24.6
I0114 10:57:52.245792 27483 api_server.go:130] duration metric: took 6.572562ms to wait for apiserver health ...
I0114 10:57:52.245800 27483 system_pods.go:43] waiting for kube-system pods to appear ...
I0114 10:57:52.426327 27483 system_pods.go:59] 7 kube-system pods found
I0114 10:57:52.426367 27483 system_pods.go:61] "coredns-6d4b75cb6d-qrnsv" [d6b36277-faa5-4a95-8152-7c3bee0e7d0e] Running
I0114 10:57:52.426372 27483 system_pods.go:61] "etcd-test-preload-105443" [c83b44f0-7ce9-4416-bd67-f187352b1165] Running
I0114 10:57:52.426377 27483 system_pods.go:61] "kube-apiserver-test-preload-105443" [aad1462d-1f15-40a5-ac94-e61bf60ad44f] Running
I0114 10:57:52.426383 27483 system_pods.go:61] "kube-controller-manager-test-preload-105443" [d9cc4f73-5345-45fa-9330-2ddafad96428] Running
I0114 10:57:52.426390 27483 system_pods.go:61] "kube-proxy-r2zx5" [248e7f72-fa03-440c-bbd2-004eb0bfa8de] Running
I0114 10:57:52.426396 27483 system_pods.go:61] "kube-scheduler-test-preload-105443" [86084f99-09ca-4e55-a94b-8d8fbf172cfd] Running
I0114 10:57:52.426402 27483 system_pods.go:61] "storage-provisioner" [6605fd74-8f22-4580-a14b-c949d30b4406] Running
I0114 10:57:52.426409 27483 system_pods.go:74] duration metric: took 180.602352ms to wait for pod list to return data ...
I0114 10:57:52.426428 27483 default_sa.go:34] waiting for default service account to be created ...
I0114 10:57:52.623880 27483 default_sa.go:45] found service account: "default"
I0114 10:57:52.623906 27483 default_sa.go:55] duration metric: took 197.472804ms for default service account to be created ...
I0114 10:57:52.623920 27483 system_pods.go:116] waiting for k8s-apps to be running ...
I0114 10:57:52.826204 27483 system_pods.go:86] 7 kube-system pods found
I0114 10:57:52.826241 27483 system_pods.go:89] "coredns-6d4b75cb6d-qrnsv" [d6b36277-faa5-4a95-8152-7c3bee0e7d0e] Running
I0114 10:57:52.826247 27483 system_pods.go:89] "etcd-test-preload-105443" [c83b44f0-7ce9-4416-bd67-f187352b1165] Running
I0114 10:57:52.826251 27483 system_pods.go:89] "kube-apiserver-test-preload-105443" [aad1462d-1f15-40a5-ac94-e61bf60ad44f] Running
I0114 10:57:52.826259 27483 system_pods.go:89] "kube-controller-manager-test-preload-105443" [d9cc4f73-5345-45fa-9330-2ddafad96428] Running
I0114 10:57:52.826263 27483 system_pods.go:89] "kube-proxy-r2zx5" [248e7f72-fa03-440c-bbd2-004eb0bfa8de] Running
I0114 10:57:52.826267 27483 system_pods.go:89] "kube-scheduler-test-preload-105443" [86084f99-09ca-4e55-a94b-8d8fbf172cfd] Running
I0114 10:57:52.826270 27483 system_pods.go:89] "storage-provisioner" [6605fd74-8f22-4580-a14b-c949d30b4406] Running
I0114 10:57:52.826276 27483 system_pods.go:126] duration metric: took 202.352112ms to wait for k8s-apps to be running ...
I0114 10:57:52.826282 27483 system_svc.go:44] waiting for kubelet service to be running ....
I0114 10:57:52.826325 27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0114 10:57:52.839706 27483 system_svc.go:56] duration metric: took 13.415483ms WaitForService to wait for kubelet.
I0114 10:57:52.839735 27483 kubeadm.go:573] duration metric: took 1.795518151s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0114 10:57:52.839750 27483 node_conditions.go:102] verifying NodePressure condition ...
I0114 10:57:53.023479 27483 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0114 10:57:53.023507 27483 node_conditions.go:123] node cpu capacity is 2
I0114 10:57:53.023517 27483 node_conditions.go:105] duration metric: took 183.763157ms to run NodePressure ...
I0114 10:57:53.023527 27483 start.go:217] waiting for startup goroutines ...
I0114 10:57:53.023818 27483 ssh_runner.go:195] Run: rm -f paused
I0114 10:57:53.072958 27483 start.go:536] kubectl: 1.26.0, cluster: 1.24.6 (minor skew: 2)
I0114 10:57:53.075182 27483 out.go:177]
W0114 10:57:53.076646 27483 out.go:239] ! /usr/local/bin/kubectl is version 1.26.0, which may have incompatibilities with Kubernetes 1.24.6.
I0114 10:57:53.078220 27483 out.go:177] - Want kubectl v1.24.6? Try 'minikube kubectl -- get pods -A'
I0114 10:57:53.079809 27483 out.go:177] * Done! kubectl is now configured to use "test-preload-105443" cluster and "default" namespace by default
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
bae9a9cb3edc4 0bb39497ab33b 6 seconds ago Running kube-proxy 0 fdc652d992931
664155e1090a1 6e38f40d628db 6 seconds ago Running storage-provisioner 1 6a158a01ab6d1
3d2601ead597b a4ca41631cc7a 16 seconds ago Running coredns 1 86a98a78e9201
386806347631e c786c777a4e1c 17 seconds ago Running kube-scheduler 0 0a0b034e3e66a
d66943d237a6b aebe758cef4cd 24 seconds ago Running etcd 2 114ee96bae199
5748edecba614 c6c20157a4233 28 seconds ago Running kube-controller-manager 0 afc5cb9211d12
230feee6c17df 860f263331c95 29 seconds ago Running kube-apiserver 0 a9f225cfadb42
93d761323665a aebe758cef4cd 38 seconds ago Exited etcd 1 114ee96bae199
*
* ==> containerd <==
* -- Journal begins at Sat 2023-01-14 10:54:55 UTC, ends at Sat 2023-01-14 10:57:54 UTC. --
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.075842399Z" level=warning msg="cleaning up after shim disconnected" id=70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be namespace=k8s.io
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.075996708Z" level=info msg="cleaning up dead shim"
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.094378001Z" level=warning msg="cleanup warnings time=\"2023-01-14T10:57:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3698 runtime=io.containerd.runc.v2\n"
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.095095571Z" level=info msg="TearDown network for sandbox \"70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be\" successfully"
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.095198624Z" level=info msg="StopPodSandbox for \"70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be\" returns successfully"
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.144717551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:6605fd74-8f22-4580-a14b-c949d30b4406,Namespace:kube-system,Attempt:0,}"
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.173245211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.173405532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.173532882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.173814391Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a158a01ab6d1d8ecc44e43209bddb915dbfcb719796b69c67437bb1c08a45ce pid=3721 runtime=io.containerd.runc.v2
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.616629899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r2zx5,Uid:248e7f72-fa03-440c-bbd2-004eb0bfa8de,Namespace:kube-system,Attempt:0,}"
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.650379129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:6605fd74-8f22-4580-a14b-c949d30b4406,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a158a01ab6d1d8ecc44e43209bddb915dbfcb719796b69c67437bb1c08a45ce\""
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.658542016Z" level=info msg="CreateContainer within sandbox \"6a158a01ab6d1d8ecc44e43209bddb915dbfcb719796b69c67437bb1c08a45ce\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.669695534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.669792844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.669802270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.670100245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdc652d9929315d42e19c4c118e4753c48d42cdd74856c80c36ace83ae9e2036 pid=3766 runtime=io.containerd.runc.v2
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.685920829Z" level=info msg="CreateContainer within sandbox \"6a158a01ab6d1d8ecc44e43209bddb915dbfcb719796b69c67437bb1c08a45ce\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"664155e1090a11bad07b6a94168b9043016feb171ba515de914ecb06fd0c8f85\""
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.691221562Z" level=info msg="StartContainer for \"664155e1090a11bad07b6a94168b9043016feb171ba515de914ecb06fd0c8f85\""
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.782122459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r2zx5,Uid:248e7f72-fa03-440c-bbd2-004eb0bfa8de,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdc652d9929315d42e19c4c118e4753c48d42cdd74856c80c36ace83ae9e2036\""
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.789335184Z" level=info msg="CreateContainer within sandbox \"fdc652d9929315d42e19c4c118e4753c48d42cdd74856c80c36ace83ae9e2036\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.829782964Z" level=info msg="StartContainer for \"664155e1090a11bad07b6a94168b9043016feb171ba515de914ecb06fd0c8f85\" returns successfully"
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.835675071Z" level=info msg="CreateContainer within sandbox \"fdc652d9929315d42e19c4c118e4753c48d42cdd74856c80c36ace83ae9e2036\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bae9a9cb3edc45cdc2f0c2f9fd9ad53d82e3c97492d5632fd7af21f805fa9ffd\""
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.839396252Z" level=info msg="StartContainer for \"bae9a9cb3edc45cdc2f0c2f9fd9ad53d82e3c97492d5632fd7af21f805fa9ffd\""
Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.978629407Z" level=info msg="StartContainer for \"bae9a9cb3edc45cdc2f0c2f9fd9ad53d82e3c97492d5632fd7af21f805fa9ffd\" returns successfully"
*
* ==> coredns [3d2601ead597bfe856431058224ec0abcc4744481797d307fa38e37060a509f8] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: test-preload-105443
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=test-preload-105443
kubernetes.io/os=linux
minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
minikube.k8s.io/name=test-preload-105443
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_14T10_55_48_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 14 Jan 2023 10:55:44 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: test-preload-105443
AcquireTime: <unset>
RenewTime: Sat, 14 Jan 2023 10:57:44 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 14 Jan 2023 10:57:34 +0000 Sat, 14 Jan 2023 10:55:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 14 Jan 2023 10:57:34 +0000 Sat, 14 Jan 2023 10:55:41 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 14 Jan 2023 10:57:34 +0000 Sat, 14 Jan 2023 10:55:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 14 Jan 2023 10:57:34 +0000 Sat, 14 Jan 2023 10:55:58 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.172
Hostname: test-preload-105443
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: a81c02a12ea74a15855eb0a6a0f839b7
System UUID: a81c02a1-2ea7-4a15-855e-b0a6a0f839b7
Boot ID: dfdfd74e-80fe-49d8-8ec9-2da740146b13
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.9
Kubelet Version: v1.24.6
Kube-Proxy Version: v1.24.6
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-qrnsv 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 114s
kube-system etcd-test-preload-105443 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 2m6s
kube-system kube-apiserver-test-preload-105443 250m (12%) 0 (0%) 0 (0%) 0 (0%) 19s
kube-system kube-controller-manager-test-preload-105443 200m (10%) 0 (0%) 0 (0%) 0 (0%) 19s
kube-system kube-proxy-r2zx5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7s
kube-system kube-scheduler-test-preload-105443 100m (5%) 0 (0%) 0 (0%) 0 (0%) 19s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 111s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 111s kube-proxy
Normal Starting 6s kube-proxy
Normal NodeAllocatableEnforced 2m16s kubelet Updated Node Allocatable limit across pods
Normal Starting 2m16s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m16s (x3 over 2m16s) kubelet Node test-preload-105443 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 2m16s (x3 over 2m16s) kubelet Node test-preload-105443 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 2m16s (x3 over 2m16s) kubelet Node test-preload-105443 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 2m6s kubelet Updated Node Allocatable limit across pods
Normal Starting 2m6s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m6s kubelet Node test-preload-105443 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m6s kubelet Node test-preload-105443 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m6s kubelet Node test-preload-105443 status is now: NodeHasSufficientPID
Normal NodeReady 116s kubelet Node test-preload-105443 status is now: NodeReady
Normal RegisteredNode 115s node-controller Node test-preload-105443 event: Registered Node test-preload-105443 in Controller
Normal Starting 37s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 34s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 31s (x8 over 37s) kubelet Node test-preload-105443 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 31s (x8 over 37s) kubelet Node test-preload-105443 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 31s (x7 over 37s) kubelet Node test-preload-105443 status is now: NodeHasSufficientPID
Normal RegisteredNode 8s node-controller Node test-preload-105443 event: Registered Node test-preload-105443 in Controller
*
* ==> dmesg <==
* [Jan14 10:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.071951] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.866409] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.109537] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.136036] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +5.045662] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[Jan14 10:55] systemd-fstab-generator[551]: Ignoring "noauto" for root device
[ +0.106246] systemd-fstab-generator[562]: Ignoring "noauto" for root device
[ +0.189681] systemd-fstab-generator[585]: Ignoring "noauto" for root device
[ +29.288739] systemd-fstab-generator[989]: Ignoring "noauto" for root device
[ +10.205938] systemd-fstab-generator[1378]: Ignoring "noauto" for root device
[Jan14 10:56] kauditd_printk_skb: 7 callbacks suppressed
[ +11.213698] kauditd_printk_skb: 20 callbacks suppressed
[Jan14 10:57] systemd-fstab-generator[2370]: Ignoring "noauto" for root device
[ +0.242305] systemd-fstab-generator[2395]: Ignoring "noauto" for root device
[ +0.156656] systemd-fstab-generator[2421]: Ignoring "noauto" for root device
[ +0.228990] systemd-fstab-generator[2443]: Ignoring "noauto" for root device
[ +6.084198] systemd-fstab-generator[3077]: Ignoring "noauto" for root device
*
* ==> etcd [93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29] <==
*
*
* ==> etcd [d66943d237a6b9fa76d5f665aeb42ce1f1cc93ae6f558d384f8ae46ec0ff5c9b] <==
* {"level":"info","ts":"2023-01-14T10:57:30.208Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"bbf1bb039b0d3451","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-01-14T10:57:30.209Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-01-14T10:57:30.213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 switched to configuration voters=(13542811178640421969)"}
{"level":"info","ts":"2023-01-14T10:57:30.214Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a5f5c7bb54d744d4","local-member-id":"bbf1bb039b0d3451","added-peer-id":"bbf1bb039b0d3451","added-peer-peer-urls":["https://192.168.39.172:2380"]}
{"level":"info","ts":"2023-01-14T10:57:30.214Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"bbf1bb039b0d3451","initial-advertise-peer-urls":["https://192.168.39.172:2380"],"listen-peer-urls":["https://192.168.39.172:2380"],"advertise-client-urls":["https://192.168.39.172:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.172:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a5f5c7bb54d744d4","local-member-id":"bbf1bb039b0d3451","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.172:2380"}
{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.172:2380"}
{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 is starting a new election at term 2"}
{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became pre-candidate at term 2"}
{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgPreVoteResp from bbf1bb039b0d3451 at term 2"}
{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became candidate at term 3"}
{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgVoteResp from bbf1bb039b0d3451 at term 3"}
{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became leader at term 3"}
{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bbf1bb039b0d3451 elected leader bbf1bb039b0d3451 at term 3"}
{"level":"info","ts":"2023-01-14T10:57:31.797Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"bbf1bb039b0d3451","local-member-attributes":"{Name:test-preload-105443 ClientURLs:[https://192.168.39.172:2379]}","request-path":"/0/members/bbf1bb039b0d3451/attributes","cluster-id":"a5f5c7bb54d744d4","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-14T10:57:31.797Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T10:57:31.799Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.172:2379"}
{"level":"info","ts":"2023-01-14T10:57:31.799Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T10:57:31.800Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-01-14T10:57:31.800Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-14T10:57:31.800Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 10:57:54 up 3 min, 0 users, load average: 1.21, 0.45, 0.17
Linux test-preload-105443 5.10.57 #1 SMP Thu Nov 17 20:18:45 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [230feee6c17df41d27d8473ecd00ccc00a5a82455941f4c899a37e0c53cf96be] <==
* I0114 10:57:34.378134 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0114 10:57:34.378200 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0114 10:57:34.378476 1 crd_finalizer.go:266] Starting CRDFinalizer
I0114 10:57:34.378628 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0114 10:57:34.389527 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0114 10:57:34.438795 1 shared_informer.go:262] Caches are synced for crd-autoregister
E0114 10:57:34.447651 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0114 10:57:34.448497 1 shared_informer.go:262] Caches are synced for node_authorizer
I0114 10:57:34.519629 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0114 10:57:34.519666 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0114 10:57:34.525991 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0114 10:57:34.526816 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0114 10:57:34.527807 1 cache.go:39] Caches are synced for autoregister controller
I0114 10:57:34.531642 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0114 10:57:34.936614 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0114 10:57:35.336999 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0114 10:57:36.301293 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0114 10:57:36.314722 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0114 10:57:36.384083 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0114 10:57:36.408061 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0114 10:57:36.421998 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0114 10:57:46.932384 1 controller.go:611] quota admission added evaluator for: endpoints
I0114 10:57:46.972336 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0114 10:57:46.978283 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0114 10:57:47.310690 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-controller-manager [5748edecba614c98a19e53d1e5078320f834903e891b01af88393d96737b5ed7] <==
* I0114 10:57:46.944965 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0114 10:57:46.945232 1 shared_informer.go:262] Caches are synced for service account
I0114 10:57:46.945315 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0114 10:57:46.945872 1 event.go:294] "Event occurred" object="test-preload-105443" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-105443 event: Registered Node test-preload-105443 in Controller"
I0114 10:57:46.952501 1 shared_informer.go:262] Caches are synced for job
I0114 10:57:46.962621 1 shared_informer.go:262] Caches are synced for HPA
I0114 10:57:46.968511 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0114 10:57:47.006548 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0114 10:57:47.006795 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0114 10:57:47.008694 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0114 10:57:47.014241 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0114 10:57:47.016984 1 shared_informer.go:262] Caches are synced for PV protection
I0114 10:57:47.017397 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0114 10:57:47.019183 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-llwpq"
I0114 10:57:47.050541 1 shared_informer.go:262] Caches are synced for persistent volume
I0114 10:57:47.055586 1 shared_informer.go:262] Caches are synced for expand
I0114 10:57:47.055614 1 shared_informer.go:262] Caches are synced for attach detach
I0114 10:57:47.094218 1 shared_informer.go:262] Caches are synced for resource quota
I0114 10:57:47.164387 1 shared_informer.go:262] Caches are synced for disruption
I0114 10:57:47.164498 1 disruption.go:371] Sending events to api server.
I0114 10:57:47.172124 1 shared_informer.go:262] Caches are synced for resource quota
I0114 10:57:47.282849 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r2zx5"
I0114 10:57:47.545095 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 10:57:47.545136 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0114 10:57:47.608658 1 shared_informer.go:262] Caches are synced for garbage collector
*
* ==> kube-proxy [bae9a9cb3edc45cdc2f0c2f9fd9ad53d82e3c97492d5632fd7af21f805fa9ffd] <==
* I0114 10:57:48.083512 1 node.go:163] Successfully retrieved node IP: 192.168.39.172
I0114 10:57:48.083680 1 server_others.go:138] "Detected node IP" address="192.168.39.172"
I0114 10:57:48.083796 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0114 10:57:48.135257 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0114 10:57:48.135274 1 server_others.go:206] "Using iptables Proxier"
I0114 10:57:48.135669 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 10:57:48.136778 1 server.go:661] "Version info" version="v1.24.6"
I0114 10:57:48.136822 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 10:57:48.138132 1 config.go:317] "Starting service config controller"
I0114 10:57:48.138202 1 shared_informer.go:255] Waiting for caches to sync for service config
I0114 10:57:48.138337 1 config.go:226] "Starting endpoint slice config controller"
I0114 10:57:48.138561 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0114 10:57:48.140222 1 config.go:444] "Starting node config controller"
I0114 10:57:48.140256 1 shared_informer.go:255] Waiting for caches to sync for node config
I0114 10:57:48.238374 1 shared_informer.go:262] Caches are synced for service config
I0114 10:57:48.239585 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0114 10:57:48.241160 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [386806347631e8ca6820b1913270cc0024734ee3aa46c7da2863a37081254fcd] <==
* I0114 10:57:37.324007 1 serving.go:348] Generated self-signed cert in-memory
I0114 10:57:37.674802 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.6"
I0114 10:57:37.674921 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 10:57:37.683402 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0114 10:57:37.683733 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0114 10:57:37.683975 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0114 10:57:37.684159 1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0114 10:57:37.684332 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0114 10:57:37.684568 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 10:57:37.684721 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0114 10:57:37.684847 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0114 10:57:37.784336 1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0114 10:57:37.784757 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 10:57:37.785302 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Sat 2023-01-14 10:54:55 UTC, ends at Sat 2023-01-14 10:57:54 UTC. --
Jan 14 10:57:36 test-preload-105443 kubelet[3083]: E0114 10:57:36.822649 3083 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-proxy:v1.24.4\": failed to prepare extraction snapshot \"extract-800183515-fmUU sha256:3479df19c04c0f4516e7034bb7291daf7fb549f04da3393c0b786f8db240d0dc\": failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2776791587 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists" image="k8s.gcr.io/kube-proxy:v1.24.4"
Jan 14 10:57:36 test-preload-105443 kubelet[3083]: E0114 10:57:36.822777 3083 kuberuntime_manager.go:905] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.24.4,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rsfvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-llwpq_kube-system(91739d92-c705-413a-9c93-bd3ff50a4bde): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "k8s.gcr.io/kube-proxy:v1.24.4": failed to prepare extraction snapshot "extract-800183515-fmUU sha256:3479df19c04c0f4516e7034bb7291daf7fb549f04da3393c0b786f8db240d0dc": failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2776791587 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists
Jan 14 10:57:36 test-preload-105443 kubelet[3083]: E0114 10:57:36.822816 3083 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"k8s.gcr.io/kube-proxy:v1.24.4\\\": failed to prepare extraction snapshot \\\"extract-800183515-fmUU sha256:3479df19c04c0f4516e7034bb7291daf7fb549f04da3393c0b786f8db240d0dc\\\": failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2776791587 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists\"" pod="kube-system/kube-proxy-llwpq" podUID=91739d92-c705-413a-9c93-bd3ff50a4bde
Jan 14 10:57:37 test-preload-105443 kubelet[3083]: I0114 10:57:37.149748 3083 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=84d1f443092d7d6e8972fbfd258f9adb path="/var/lib/kubelet/pods/84d1f443092d7d6e8972fbfd258f9adb/volumes"
Jan 14 10:57:37 test-preload-105443 kubelet[3083]: I0114 10:57:37.155243 3083 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8957cb515cac201172c0da126ed92840 path="/var/lib/kubelet/pods/8957cb515cac201172c0da126ed92840/volumes"
Jan 14 10:57:37 test-preload-105443 kubelet[3083]: I0114 10:57:37.156762 3083 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bf9ef742a4e80f823bde6bfa4ea6ea87 path="/var/lib/kubelet/pods/bf9ef742a4e80f823bde6bfa4ea6ea87/volumes"
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116275 3083 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsfvn\" (UniqueName: \"kubernetes.io/projected/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-api-access-rsfvn\") pod \"91739d92-c705-413a-9c93-bd3ff50a4bde\" (UID: \"91739d92-c705-413a-9c93-bd3ff50a4bde\") "
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116317 3083 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-proxy\") pod \"91739d92-c705-413a-9c93-bd3ff50a4bde\" (UID: \"91739d92-c705-413a-9c93-bd3ff50a4bde\") "
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116335 3083 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-lib-modules\") pod \"91739d92-c705-413a-9c93-bd3ff50a4bde\" (UID: \"91739d92-c705-413a-9c93-bd3ff50a4bde\") "
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116361 3083 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-xtables-lock\") pod \"91739d92-c705-413a-9c93-bd3ff50a4bde\" (UID: \"91739d92-c705-413a-9c93-bd3ff50a4bde\") "
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116519 3083 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "91739d92-c705-413a-9c93-bd3ff50a4bde" (UID: "91739d92-c705-413a-9c93-bd3ff50a4bde"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116952 3083 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "91739d92-c705-413a-9c93-bd3ff50a4bde" (UID: "91739d92-c705-413a-9c93-bd3ff50a4bde"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: W0114 10:57:47.117675 3083 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/91739d92-c705-413a-9c93-bd3ff50a4bde/volumes/kubernetes.io~configmap/kube-proxy: clearQuota called, but quotas disabled
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.118071 3083 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-proxy" (OuterVolumeSpecName: "kube-proxy") pod "91739d92-c705-413a-9c93-bd3ff50a4bde" (UID: "91739d92-c705-413a-9c93-bd3ff50a4bde"). InnerVolumeSpecName "kube-proxy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.125185 3083 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-api-access-rsfvn" (OuterVolumeSpecName: "kube-api-access-rsfvn") pod "91739d92-c705-413a-9c93-bd3ff50a4bde" (UID: "91739d92-c705-413a-9c93-bd3ff50a4bde"). InnerVolumeSpecName "kube-api-access-rsfvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.216891 3083 reconciler.go:384] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-lib-modules\") on node \"test-preload-105443\" DevicePath \"\""
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.216944 3083 reconciler.go:384] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-xtables-lock\") on node \"test-preload-105443\" DevicePath \"\""
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.216958 3083 reconciler.go:384] "Volume detached for volume \"kube-api-access-rsfvn\" (UniqueName: \"kubernetes.io/projected/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-api-access-rsfvn\") on node \"test-preload-105443\" DevicePath \"\""
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.216974 3083 reconciler.go:384] "Volume detached for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-proxy\") on node \"test-preload-105443\" DevicePath \"\""
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.306779 3083 topology_manager.go:200] "Topology Admit Handler"
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.418162 3083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/248e7f72-fa03-440c-bbd2-004eb0bfa8de-xtables-lock\") pod \"kube-proxy-r2zx5\" (UID: \"248e7f72-fa03-440c-bbd2-004eb0bfa8de\") " pod="kube-system/kube-proxy-r2zx5"
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.418363 3083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/248e7f72-fa03-440c-bbd2-004eb0bfa8de-lib-modules\") pod \"kube-proxy-r2zx5\" (UID: \"248e7f72-fa03-440c-bbd2-004eb0bfa8de\") " pod="kube-system/kube-proxy-r2zx5"
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.418485 3083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrvgd\" (UniqueName: \"kubernetes.io/projected/248e7f72-fa03-440c-bbd2-004eb0bfa8de-kube-api-access-lrvgd\") pod \"kube-proxy-r2zx5\" (UID: \"248e7f72-fa03-440c-bbd2-004eb0bfa8de\") " pod="kube-system/kube-proxy-r2zx5"
Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.418575 3083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/248e7f72-fa03-440c-bbd2-004eb0bfa8de-kube-proxy\") pod \"kube-proxy-r2zx5\" (UID: \"248e7f72-fa03-440c-bbd2-004eb0bfa8de\") " pod="kube-system/kube-proxy-r2zx5"
Jan 14 10:57:49 test-preload-105443 kubelet[3083]: I0114 10:57:49.147266 3083 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=91739d92-c705-413a-9c93-bd3ff50a4bde path="/var/lib/kubelet/pods/91739d92-c705-413a-9c93-bd3ff50a4bde/volumes"
*
* ==> storage-provisioner [664155e1090a11bad07b6a94168b9043016feb171ba515de914ecb06fd0c8f85] <==
* I0114 10:57:47.874252 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0114 10:57:47.893618 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0114 10:57:47.894386 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-105443 -n test-preload-105443
helpers_test.go:261: (dbg) Run: kubectl --context test-preload-105443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context test-preload-105443 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context test-preload-105443 describe pod : exit status 1 (48.988125ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context test-preload-105443 describe pod : exit status 1
helpers_test.go:175: Cleaning up "test-preload-105443" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-105443
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-105443: (1.167204291s)
--- FAIL: TestPreload (192.53s)