=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-617863 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E0203 22:50:07.892467 11145 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/addons-032036/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-617863 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (2m1.773338976s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-617863 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-617863 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.218626497s)
preload_test.go:63: (dbg) Run: out/minikube-linux-amd64 stop -p test-preload-617863
E0203 22:52:04.847592 11145 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/addons-032036/client.crt: no such file or directory
E0203 22:52:07.173041 11145 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/ingress-addon-legacy-561669/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-617863: (1m31.530047046s)
preload_test.go:71: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-617863 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd
E0203 22:53:31.849199 11145 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/functional-188817/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-617863 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd: (2m4.519867955s)
preload_test.go:80: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-617863 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got
-- stdout --
IMAGE                                      TAG                   IMAGE ID        SIZE
docker.io/kindest/kindnetd                 v20220726-ed811e41    d921cee849482   25.8MB
gcr.io/k8s-minikube/storage-provisioner    v5                    6e38f40d628db   9.06MB
k8s.gcr.io/coredns/coredns                 v1.8.6                a4ca41631cc7a   13.6MB
k8s.gcr.io/etcd                            3.5.3-0               aebe758cef4cd   102MB
k8s.gcr.io/kube-apiserver                  v1.24.4               6cab9d1bed1be   33.8MB
k8s.gcr.io/kube-controller-manager         v1.24.4               1f99cb6da9a82   31MB
k8s.gcr.io/kube-proxy                      v1.24.4               7a53d1e08ef58   39.5MB
k8s.gcr.io/kube-scheduler                  v1.24.4               03fa22539fc1c   15.5MB
k8s.gcr.io/pause                           3.7                   221177c6082a8   311kB
-- /stdout --
panic.go:522: *** TestPreload FAILED at 2023-02-03 22:54:43.318487194 +0000 UTC m=+2764.073562464
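The failure follows from the sequence earlier in the log: busybox was pulled with crictl after the cluster was created with --preload=false, the cluster was stopped, and the second start downloaded and re-applied the v1.24.4 preload tarball, after which the image list above shows only the preloaded images. Per the message at preload_test.go:85, the check that fails simply looks for the image name in the "crictl image ls" output. Below is a minimal sketch of that kind of check, reusing the binary path and profile name from this run; it is a hypothetical helper for illustration, not the actual code in preload_test.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent shells into the given minikube profile and reports whether
// "sudo crictl image ls" mentions wantImage anywhere in its output.
// (Hypothetical helper; binary path and arguments mirror this test run.)
func imagePresent(profile, wantImage string) (bool, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
		"--", "sudo", "crictl", "image", "ls")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("crictl image ls failed: %v\n%s", err, out)
	}
	return strings.Contains(string(out), wantImage), nil
}

func main() {
	// Values taken from this test run.
	ok, err := imagePresent("test-preload-617863", "gcr.io/k8s-minikube/busybox")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("busybox still present after restart:", ok)
}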
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-617863 -n test-preload-617863
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-617863 logs -n 25
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-893586 ssh -n | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:31 UTC | 03 Feb 23 22:31 UTC |
| | multinode-893586-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-893586 ssh -n multinode-893586 sudo cat | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:31 UTC | 03 Feb 23 22:31 UTC |
| | /home/docker/cp-test_multinode-893586-m03_multinode-893586.txt | | | | | |
| cp | multinode-893586 cp multinode-893586-m03:/home/docker/cp-test.txt | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:31 UTC | 03 Feb 23 22:31 UTC |
| | multinode-893586-m02:/home/docker/cp-test_multinode-893586-m03_multinode-893586-m02.txt | | | | | |
| ssh | multinode-893586 ssh -n | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:31 UTC | 03 Feb 23 22:31 UTC |
| | multinode-893586-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-893586 ssh -n multinode-893586-m02 sudo cat | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:31 UTC | 03 Feb 23 22:31 UTC |
| | /home/docker/cp-test_multinode-893586-m03_multinode-893586-m02.txt | | | | | |
| node | multinode-893586 node stop m03 | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:31 UTC | 03 Feb 23 22:31 UTC |
| node | multinode-893586 node start | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:31 UTC | 03 Feb 23 22:32 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-893586 | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:32 UTC | |
| stop | -p multinode-893586 | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:32 UTC | 03 Feb 23 22:35 UTC |
| start | -p multinode-893586 | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:35 UTC | 03 Feb 23 22:40 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-893586 | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:40 UTC | |
| node | multinode-893586 node delete | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:40 UTC | 03 Feb 23 22:40 UTC |
| | m03 | | | | | |
| stop | multinode-893586 stop | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:40 UTC | 03 Feb 23 22:43 UTC |
| start | -p multinode-893586 | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:43 UTC | 03 Feb 23 22:48 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-893586 | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:48 UTC | |
| start | -p multinode-893586-m02 | multinode-893586-m02 | jenkins | v1.29.0 | 03 Feb 23 22:48 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-893586-m03 | multinode-893586-m03 | jenkins | v1.29.0 | 03 Feb 23 22:48 UTC | 03 Feb 23 22:49 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-893586 | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:49 UTC | |
| delete | -p multinode-893586-m03 | multinode-893586-m03 | jenkins | v1.29.0 | 03 Feb 23 22:49 UTC | 03 Feb 23 22:49 UTC |
| delete | -p multinode-893586 | multinode-893586 | jenkins | v1.29.0 | 03 Feb 23 22:49 UTC | 03 Feb 23 22:49 UTC |
| start | -p test-preload-617863 | test-preload-617863 | jenkins | v1.29.0 | 03 Feb 23 22:49 UTC | 03 Feb 23 22:51 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-617863 | test-preload-617863 | jenkins | v1.29.0 | 03 Feb 23 22:51 UTC | 03 Feb 23 22:51 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| stop | -p test-preload-617863 | test-preload-617863 | jenkins | v1.29.0 | 03 Feb 23 22:51 UTC | 03 Feb 23 22:52 UTC |
| start | -p test-preload-617863 | test-preload-617863 | jenkins | v1.29.0 | 03 Feb 23 22:52 UTC | 03 Feb 23 22:54 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p test-preload-617863 -- sudo | test-preload-617863 | jenkins | v1.29.0 | 03 Feb 23 22:54 UTC | 03 Feb 23 22:54 UTC |
| | crictl image ls | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/02/03 22:52:38
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.19.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0203 22:52:38.603285 29404 out.go:296] Setting OutFile to fd 1 ...
I0203 22:52:38.603474 29404 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0203 22:52:38.603484 29404 out.go:309] Setting ErrFile to fd 2...
I0203 22:52:38.603489 29404 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0203 22:52:38.603595 29404 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-4071/.minikube/bin
I0203 22:52:38.604119 29404 out.go:303] Setting JSON to false
I0203 22:52:38.604886 29404 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5706,"bootTime":1675459053,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0203 22:52:38.604939 29404 start.go:135] virtualization: kvm guest
I0203 22:52:38.607320 29404 out.go:177] * [test-preload-617863] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0203 22:52:38.608590 29404 notify.go:220] Checking for updates...
I0203 22:52:38.609951 29404 out.go:177] - MINIKUBE_LOCATION=15770
I0203 22:52:38.611239 29404 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0203 22:52:38.612542 29404 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15770-4071/kubeconfig
I0203 22:52:38.613946 29404 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-4071/.minikube
I0203 22:52:38.615469 29404 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0203 22:52:38.616715 29404 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0203 22:52:38.618216 29404 config.go:180] Loaded profile config "test-preload-617863": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0203 22:52:38.618591 29404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0203 22:52:38.618661 29404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 22:52:38.633338 29404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
I0203 22:52:38.633657 29404 main.go:141] libmachine: () Calling .GetVersion
I0203 22:52:38.634129 29404 main.go:141] libmachine: Using API Version 1
I0203 22:52:38.634162 29404 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 22:52:38.634444 29404 main.go:141] libmachine: () Calling .GetMachineName
I0203 22:52:38.634640 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:52:38.636271 29404 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
I0203 22:52:38.637446 29404 driver.go:365] Setting default libvirt URI to qemu:///system
I0203 22:52:38.637750 29404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0203 22:52:38.637782 29404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 22:52:38.651375 29404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
I0203 22:52:38.651690 29404 main.go:141] libmachine: () Calling .GetVersion
I0203 22:52:38.652103 29404 main.go:141] libmachine: Using API Version 1
I0203 22:52:38.652127 29404 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 22:52:38.652385 29404 main.go:141] libmachine: () Calling .GetMachineName
I0203 22:52:38.652524 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:52:38.683783 29404 out.go:177] * Using the kvm2 driver based on existing profile
I0203 22:52:38.685126 29404 start.go:296] selected driver: kvm2
I0203 22:52:38.685144 29404 start.go:857] validating driver "kvm2" against &{Name:test-preload-617863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15728/minikube-v1.29.0-1675184411-15728-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-617863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0203 22:52:38.685228 29404 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0203 22:52:38.685796 29404 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0203 22:52:38.685858 29404 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15770-4071/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0203 22:52:38.699143 29404 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0203 22:52:38.699428 29404 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0203 22:52:38.699451 29404 cni.go:84] Creating CNI manager for ""
I0203 22:52:38.699456 29404 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0203 22:52:38.699463 29404 start_flags.go:319] config:
{Name:test-preload-617863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15728/minikube-v1.29.0-1675184411-15728-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-617863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0203 22:52:38.699551 29404 iso.go:125] acquiring lock: {Name:mke0f6935fee3a47811980702bf8341cbbac5c03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0203 22:52:38.702061 29404 out.go:177] * Starting control plane node test-preload-617863 in cluster test-preload-617863
I0203 22:52:38.703467 29404 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0203 22:52:39.328984 29404 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0203 22:52:39.329022 29404 cache.go:57] Caching tarball of preloaded images
I0203 22:52:39.329186 29404 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0203 22:52:39.331250 29404 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
I0203 22:52:39.332313 29404 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0203 22:52:39.489145 29404 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/15770-4071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0203 22:52:58.156410 29404 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0203 22:52:58.156494 29404 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15770-4071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0203 22:52:59.022115 29404 cache.go:60] Finished verifying existence of preloaded tar for v1.24.4 on containerd
I0203 22:52:59.022259 29404 profile.go:148] Saving config to /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/config.json ...
I0203 22:52:59.022503 29404 cache.go:193] Successfully downloaded all kic artifacts
I0203 22:52:59.022538 29404 start.go:364] acquiring machines lock for test-preload-617863: {Name:mk4b399c5c9f324b1648b408074bc3189bb07f56 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0203 22:52:59.022608 29404 start.go:368] acquired machines lock for "test-preload-617863" in 49.001µs
I0203 22:52:59.022626 29404 start.go:96] Skipping create...Using existing machine configuration
I0203 22:52:59.022633 29404 fix.go:55] fixHost starting:
I0203 22:52:59.022897 29404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0203 22:52:59.022938 29404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 22:52:59.036938 29404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
I0203 22:52:59.037303 29404 main.go:141] libmachine: () Calling .GetVersion
I0203 22:52:59.037761 29404 main.go:141] libmachine: Using API Version 1
I0203 22:52:59.037786 29404 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 22:52:59.038073 29404 main.go:141] libmachine: () Calling .GetMachineName
I0203 22:52:59.038221 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:52:59.038379 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetState
I0203 22:52:59.039902 29404 fix.go:103] recreateIfNeeded on test-preload-617863: state=Stopped err=<nil>
I0203 22:52:59.039927 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
W0203 22:52:59.040060 29404 fix.go:129] unexpected machine state, will restart: <nil>
I0203 22:52:59.042366 29404 out.go:177] * Restarting existing kvm2 VM for "test-preload-617863" ...
I0203 22:52:59.043876 29404 main.go:141] libmachine: (test-preload-617863) Calling .Start
I0203 22:52:59.044034 29404 main.go:141] libmachine: (test-preload-617863) Ensuring networks are active...
I0203 22:52:59.044672 29404 main.go:141] libmachine: (test-preload-617863) Ensuring network default is active
I0203 22:52:59.044994 29404 main.go:141] libmachine: (test-preload-617863) Ensuring network mk-test-preload-617863 is active
I0203 22:52:59.045308 29404 main.go:141] libmachine: (test-preload-617863) Getting domain xml...
I0203 22:52:59.045909 29404 main.go:141] libmachine: (test-preload-617863) Creating domain...
I0203 22:53:00.247258 29404 main.go:141] libmachine: (test-preload-617863) Waiting to get IP...
I0203 22:53:00.248056 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:00.248498 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:00.248551 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:00.248485 29442 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
I0203 22:53:00.512781 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:00.513201 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:00.513225 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:00.513166 29442 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
I0203 22:53:00.896795 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:00.897193 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:00.897222 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:00.897160 29442 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
I0203 22:53:01.321616 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:01.322014 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:01.322045 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:01.321961 29442 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
I0203 22:53:01.796073 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:01.796455 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:01.796480 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:01.796398 29442 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
I0203 22:53:02.385148 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:02.385553 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:02.385579 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:02.385527 29442 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
I0203 22:53:03.221312 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:03.221691 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:03.221718 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:03.221651 29442 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
I0203 22:53:03.969592 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:03.970050 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:03.970076 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:03.970006 29442 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
I0203 22:53:04.959359 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:04.959794 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:04.959822 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:04.959751 29442 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
I0203 22:53:06.150774 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:06.151164 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:06.151196 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:06.151119 29442 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
I0203 22:53:07.830118 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:07.830520 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:07.830550 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:07.830475 29442 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
I0203 22:53:10.177996 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:10.178427 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:10.178465 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:10.178371 29442 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
I0203 22:53:13.548760 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:13.549147 29404 main.go:141] libmachine: (test-preload-617863) DBG | unable to find current IP address of domain test-preload-617863 in network mk-test-preload-617863
I0203 22:53:13.549170 29404 main.go:141] libmachine: (test-preload-617863) DBG | I0203 22:53:13.549101 29442 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
I0203 22:53:16.670594 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:16.671012 29404 main.go:141] libmachine: (test-preload-617863) Found IP for machine: 192.168.39.125
I0203 22:53:16.671040 29404 main.go:141] libmachine: (test-preload-617863) Reserving static IP address...
I0203 22:53:16.671058 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has current primary IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:16.671471 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "test-preload-617863", mac: "52:54:00:50:a8:f9", ip: "192.168.39.125"} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:16.671496 29404 main.go:141] libmachine: (test-preload-617863) DBG | skip adding static IP to network mk-test-preload-617863 - found existing host DHCP lease matching {name: "test-preload-617863", mac: "52:54:00:50:a8:f9", ip: "192.168.39.125"}
I0203 22:53:16.671508 29404 main.go:141] libmachine: (test-preload-617863) Reserved static IP address: 192.168.39.125
I0203 22:53:16.671524 29404 main.go:141] libmachine: (test-preload-617863) Waiting for SSH to be available...
I0203 22:53:16.671543 29404 main.go:141] libmachine: (test-preload-617863) DBG | Getting to WaitForSSH function...
I0203 22:53:16.673735 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:16.674194 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:16.674225 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:16.674369 29404 main.go:141] libmachine: (test-preload-617863) DBG | Using SSH client type: external
I0203 22:53:16.674384 29404 main.go:141] libmachine: (test-preload-617863) DBG | Using SSH private key: /home/jenkins/minikube-integration/15770-4071/.minikube/machines/test-preload-617863/id_rsa (-rw-------)
I0203 22:53:16.674402 29404 main.go:141] libmachine: (test-preload-617863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15770-4071/.minikube/machines/test-preload-617863/id_rsa -p 22] /usr/bin/ssh <nil>}
I0203 22:53:16.674411 29404 main.go:141] libmachine: (test-preload-617863) DBG | About to run SSH command:
I0203 22:53:16.674418 29404 main.go:141] libmachine: (test-preload-617863) DBG | exit 0
I0203 22:53:16.769737 29404 main.go:141] libmachine: (test-preload-617863) DBG | SSH cmd err, output: <nil>:
I0203 22:53:16.770058 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetConfigRaw
I0203 22:53:16.770704 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetIP
I0203 22:53:16.773176 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:16.773495 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:16.773520 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:16.773718 29404 profile.go:148] Saving config to /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/config.json ...
I0203 22:53:16.773870 29404 machine.go:88] provisioning docker machine ...
I0203 22:53:16.773886 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:53:16.774081 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetMachineName
I0203 22:53:16.774250 29404 buildroot.go:166] provisioning hostname "test-preload-617863"
I0203 22:53:16.774266 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetMachineName
I0203 22:53:16.774404 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHHostname
I0203 22:53:16.776573 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:16.776876 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:16.776903 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:16.777016 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHPort
I0203 22:53:16.777183 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:53:16.777335 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:53:16.777494 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHUsername
I0203 22:53:16.777664 29404 main.go:141] libmachine: Using SSH client type: native
I0203 22:53:16.777828 29404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 192.168.39.125 22 <nil> <nil>}
I0203 22:53:16.777842 29404 main.go:141] libmachine: About to run SSH command:
sudo hostname test-preload-617863 && echo "test-preload-617863" | sudo tee /etc/hostname
I0203 22:53:16.916747 29404 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-617863
I0203 22:53:16.916776 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHHostname
I0203 22:53:16.919380 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:16.919750 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:16.919780 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:16.919920 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHPort
I0203 22:53:16.920112 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:53:16.920258 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:53:16.920401 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHUsername
I0203 22:53:16.920561 29404 main.go:141] libmachine: Using SSH client type: native
I0203 22:53:16.920697 29404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 192.168.39.125 22 <nil> <nil>}
I0203 22:53:16.920714 29404 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-617863' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-617863/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-617863' | sudo tee -a /etc/hosts;
fi
fi
I0203 22:53:17.058106 29404 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0203 22:53:17.058134 29404 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15770-4071/.minikube CaCertPath:/home/jenkins/minikube-integration/15770-4071/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15770-4071/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15770-4071/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15770-4071/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15770-4071/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15770-4071/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15770-4071/.minikube}
I0203 22:53:17.058156 29404 buildroot.go:174] setting up certificates
I0203 22:53:17.058170 29404 provision.go:83] configureAuth start
I0203 22:53:17.058179 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetMachineName
I0203 22:53:17.058445 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetIP
I0203 22:53:17.061036 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.061360 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:17.061384 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.061528 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHHostname
I0203 22:53:17.063437 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.063695 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:17.063724 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.063866 29404 provision.go:138] copyHostCerts
I0203 22:53:17.063922 29404 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-4071/.minikube/ca.pem, removing ...
I0203 22:53:17.063934 29404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-4071/.minikube/ca.pem
I0203 22:53:17.064000 29404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-4071/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15770-4071/.minikube/ca.pem (1078 bytes)
I0203 22:53:17.064090 29404 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-4071/.minikube/cert.pem, removing ...
I0203 22:53:17.064100 29404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-4071/.minikube/cert.pem
I0203 22:53:17.064126 29404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-4071/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15770-4071/.minikube/cert.pem (1123 bytes)
I0203 22:53:17.064172 29404 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-4071/.minikube/key.pem, removing ...
I0203 22:53:17.064179 29404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-4071/.minikube/key.pem
I0203 22:53:17.064197 29404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-4071/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15770-4071/.minikube/key.pem (1679 bytes)
I0203 22:53:17.064235 29404 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15770-4071/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15770-4071/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15770-4071/.minikube/certs/ca-key.pem org=jenkins.test-preload-617863 san=[192.168.39.125 192.168.39.125 localhost 127.0.0.1 minikube test-preload-617863]
I0203 22:53:17.252199 29404 provision.go:172] copyRemoteCerts
I0203 22:53:17.252246 29404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0203 22:53:17.252266 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHHostname
I0203 22:53:17.254814 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.255094 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:17.255122 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.255274 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHPort
I0203 22:53:17.255457 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:53:17.255624 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHUsername
I0203 22:53:17.255741 29404 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15770-4071/.minikube/machines/test-preload-617863/id_rsa Username:docker}
I0203 22:53:17.347480 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0203 22:53:17.368454 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0203 22:53:17.388819 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0203 22:53:17.409572 29404 provision.go:86] duration metric: configureAuth took 351.393879ms
I0203 22:53:17.409590 29404 buildroot.go:189] setting minikube options for container-runtime
I0203 22:53:17.409747 29404 config.go:180] Loaded profile config "test-preload-617863": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0203 22:53:17.409799 29404 machine.go:91] provisioned docker machine in 635.878668ms
I0203 22:53:17.409823 29404 start.go:300] post-start starting for "test-preload-617863" (driver="kvm2")
I0203 22:53:17.409833 29404 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0203 22:53:17.409860 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:53:17.410143 29404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0203 22:53:17.410170 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHHostname
I0203 22:53:17.412544 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.412884 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:17.412912 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.413026 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHPort
I0203 22:53:17.413221 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:53:17.413370 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHUsername
I0203 22:53:17.413505 29404 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15770-4071/.minikube/machines/test-preload-617863/id_rsa Username:docker}
I0203 22:53:17.508157 29404 ssh_runner.go:195] Run: cat /etc/os-release
I0203 22:53:17.512253 29404 info.go:137] Remote host: Buildroot 2021.02.12
I0203 22:53:17.512281 29404 filesync.go:126] Scanning /home/jenkins/minikube-integration/15770-4071/.minikube/addons for local assets ...
I0203 22:53:17.512353 29404 filesync.go:126] Scanning /home/jenkins/minikube-integration/15770-4071/.minikube/files for local assets ...
I0203 22:53:17.512431 29404 filesync.go:149] local asset: /home/jenkins/minikube-integration/15770-4071/.minikube/files/etc/ssl/certs/111452.pem -> 111452.pem in /etc/ssl/certs
I0203 22:53:17.512532 29404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0203 22:53:17.521274 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/files/etc/ssl/certs/111452.pem --> /etc/ssl/certs/111452.pem (1708 bytes)
I0203 22:53:17.542936 29404 start.go:303] post-start completed in 133.09901ms
I0203 22:53:17.542956 29404 fix.go:57] fixHost completed within 18.520323655s
I0203 22:53:17.542980 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHHostname
I0203 22:53:17.545615 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.545949 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:17.545975 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.546133 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHPort
I0203 22:53:17.546340 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:53:17.546557 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:53:17.546736 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHUsername
I0203 22:53:17.546897 29404 main.go:141] libmachine: Using SSH client type: native
I0203 22:53:17.547043 29404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 192.168.39.125 22 <nil> <nil>}
I0203 22:53:17.547059 29404 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0203 22:53:17.678438 29404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1675464797.628350477
I0203 22:53:17.678468 29404 fix.go:207] guest clock: 1675464797.628350477
I0203 22:53:17.678476 29404 fix.go:220] Guest: 2023-02-03 22:53:17.628350477 +0000 UTC Remote: 2023-02-03 22:53:17.542961121 +0000 UTC m=+38.999470639 (delta=85.389356ms)
I0203 22:53:17.678496 29404 fix.go:191] guest clock delta is within tolerance: 85.389356ms
I0203 22:53:17.678502 29404 start.go:83] releasing machines lock for "test-preload-617863", held for 18.655883858s
I0203 22:53:17.678526 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:53:17.678796 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetIP
I0203 22:53:17.681315 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.681608 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:17.681631 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.681794 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:53:17.682231 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:53:17.682399 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:53:17.682527 29404 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0203 22:53:17.682566 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHHostname
I0203 22:53:17.682587 29404 ssh_runner.go:195] Run: cat /version.json
I0203 22:53:17.682606 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHHostname
I0203 22:53:17.685157 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.685461 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:17.685488 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.685507 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.685639 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHPort
I0203 22:53:17.685795 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:53:17.685883 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:17.685920 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:17.685938 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHUsername
I0203 22:53:17.686091 29404 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15770-4071/.minikube/machines/test-preload-617863/id_rsa Username:docker}
I0203 22:53:17.686111 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHPort
I0203 22:53:17.686241 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:53:17.686404 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHUsername
I0203 22:53:17.686535 29404 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15770-4071/.minikube/machines/test-preload-617863/id_rsa Username:docker}
I0203 22:53:17.776046 29404 ssh_runner.go:195] Run: systemctl --version
I0203 22:53:17.935136 29404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0203 22:53:17.940830 29404 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0203 22:53:17.940884 29404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0203 22:53:17.960025 29404 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0203 22:53:17.960050 29404 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0203 22:53:17.960135 29404 ssh_runner.go:195] Run: sudo crictl images --output json
I0203 22:53:21.990070 29404 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.02991443s)
I0203 22:53:21.990197 29404 containerd.go:604] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
I0203 22:53:21.990252 29404 ssh_runner.go:195] Run: which lz4
I0203 22:53:21.994327 29404 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0203 22:53:21.998793 29404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0203 22:53:21.998810 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
I0203 22:53:23.805581 29404 containerd.go:551] Took 1.811273 seconds to copy over tarball
I0203 22:53:23.805669 29404 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0203 22:53:26.715432 29404 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.909734044s)
I0203 22:53:26.715460 29404 containerd.go:558] Took 2.909849 seconds to extract the tarball
I0203 22:53:26.715468 29404 ssh_runner.go:146] rm: /preloaded.tar.lz4
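The lines above show the preload path end to end: stat the tarball on the guest, scp it from the local cache when it is missing, unpack it with lz4 into /var, then delete it. A minimal Go sketch of the same check-extract-remove sequence, run locally rather than through minikube's ssh_runner (the extractPreload helper and paths are illustrative, not minikube code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the sequence in the log: skip extraction if the
// tarball is absent, otherwise unpack it with lz4 and delete it afterwards.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not present: %w", err)
	}
	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract: %w", err)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}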
I0203 22:53:26.755795 29404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0203 22:53:26.853476 29404 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0203 22:53:26.868890 29404 start.go:483] detecting cgroup driver to use...
I0203 22:53:26.868953 29404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0203 22:53:29.547711 29404 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.678729728s)
I0203 22:53:29.547789 29404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0203 22:53:29.560147 29404 docker.go:186] disabling cri-docker service (if available) ...
I0203 22:53:29.560196 29404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0203 22:53:29.572921 29404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0203 22:53:29.585509 29404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0203 22:53:29.681178 29404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0203 22:53:29.784199 29404 docker.go:202] disabling docker service ...
I0203 22:53:29.784255 29404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0203 22:53:29.796063 29404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0203 22:53:29.808173 29404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0203 22:53:29.917732 29404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0203 22:53:30.018514 29404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0203 22:53:30.031109   29404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0203 22:53:30.047234 29404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
I0203 22:53:30.056841 29404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0203 22:53:30.065684 29404 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0203 22:53:30.065732 29404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0203 22:53:30.074377 29404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0203 22:53:30.083952 29404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0203 22:53:30.093381 29404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0203 22:53:30.102541 29404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0203 22:53:30.112134 29404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
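The commands above point crictl at the containerd socket via /etc/crictl.yaml, then patch /etc/containerd/config.toml in place with sed: sandbox_image, restrict_oom_score_adj, SystemdCgroup = false (matching the cgroupfs driver), the runc v2 runtime, and conf_dir. A rough Go equivalent of one such key rewrite, offered only as a sketch of the pattern (the setTOMLKey helper is made up; minikube itself shells out to sed):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites a `key = value` line, mirroring
// sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' from the log.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte("${1}"+key+" = "+value))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setTOMLKey("/etc/containerd/config.toml", "SystemdCgroup", "false"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}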
I0203 22:53:30.121440 29404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0203 22:53:30.129859 29404 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0203 22:53:30.129891 29404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0203 22:53:30.142588 29404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
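Bridged pod traffic needs the br_netfilter module loaded and IPv4 forwarding enabled, which is what the two commands above do after the sysctl probe fails. The same toggle as a tiny illustrative Go helper (root required, purely a sketch):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// modprobe br_netfilter, then echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "modprobe:", err)
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "ip_forward:", err)
	}
}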
I0203 22:53:30.151425 29404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0203 22:53:30.258794 29404 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0203 22:53:30.284221 29404 start.go:530] Will wait 60s for socket path /run/containerd/containerd.sock
I0203 22:53:30.284292 29404 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0203 22:53:30.289512 29404 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0203 22:53:31.394774 29404 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
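After restarting containerd, the start code does not assume the daemon is up; it polls for the socket path for up to 60s, retrying the stat as shown above. A sketch of that wait loop, assuming a fixed one-second poll interval (the waitForSocket helper is hypothetical):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the containerd socket exists, the same way the
// log retries "stat /run/containerd/containerd.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}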
I0203 22:53:31.400348 29404 start.go:551] Will wait 60s for crictl version
I0203 22:53:31.400393 29404 ssh_runner.go:195] Run: which crictl
I0203 22:53:31.404567 29404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0203 22:53:31.432298 29404 start.go:567] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.15
RuntimeApiVersion: v1alpha2
I0203 22:53:31.432343 29404 ssh_runner.go:195] Run: containerd --version
I0203 22:53:31.461083 29404 ssh_runner.go:195] Run: containerd --version
I0203 22:53:31.497454 29404 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.15 ...
I0203 22:53:31.498878 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetIP
I0203 22:53:31.501530 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:31.501883 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:53:31.501919 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:53:31.502121 29404 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0203 22:53:31.506039 29404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0203 22:53:31.517455 29404 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0203 22:53:31.517518 29404 ssh_runner.go:195] Run: sudo crictl images --output json
I0203 22:53:31.544700 29404 containerd.go:608] all images are preloaded for containerd runtime.
I0203 22:53:31.544721 29404 containerd.go:522] Images already preloaded, skipping extraction
I0203 22:53:31.544766 29404 ssh_runner.go:195] Run: sudo crictl images --output json
I0203 22:53:31.571638 29404 containerd.go:608] all images are preloaded for containerd runtime.
I0203 22:53:31.571656 29404 cache_images.go:84] Images are preloaded, skipping loading
I0203 22:53:31.571702 29404 ssh_runner.go:195] Run: sudo crictl info
I0203 22:53:31.601004 29404 cni.go:84] Creating CNI manager for ""
I0203 22:53:31.601024 29404 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0203 22:53:31.601064 29404 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0203 22:53:31.601087 29404 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-617863 NodeName:test-preload-617863 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0203 22:53:31.601341 29404 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.125
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-617863"
kubeletExtraArgs:
node-ip: 192.168.39.125
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.4
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0203 22:53:31.601480 29404 kubeadm.go:968] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-617863 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
[Install]
config:
{KubernetesVersion:v1.24.4 ClusterName:test-preload-617863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0203 22:53:31.601542 29404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
I0203 22:53:31.610738 29404 binaries.go:44] Found k8s binaries, skipping transfer
I0203 22:53:31.610789 29404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0203 22:53:31.619671 29404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (484 bytes)
I0203 22:53:31.634730 29404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0203 22:53:31.649630 29404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
I0203 22:53:31.665192 29404 ssh_runner.go:195] Run: grep 192.168.39.125 control-plane.minikube.internal$ /etc/hosts
I0203 22:53:31.668583 29404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0203 22:53:31.679199 29404 certs.go:56] Setting up /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863 for IP: 192.168.39.125
I0203 22:53:31.679229 29404 certs.go:186] acquiring lock for shared ca certs: {Name:mkab21f13218b102a13d812b44dc15e149fbf879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 22:53:31.679363 29404 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15770-4071/.minikube/ca.key
I0203 22:53:31.679397 29404 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15770-4071/.minikube/proxy-client-ca.key
I0203 22:53:31.679463 29404 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/client.key
I0203 22:53:31.679514 29404 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/apiserver.key.657bd91f
I0203 22:53:31.679545 29404 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/proxy-client.key
I0203 22:53:31.679634 29404 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-4071/.minikube/certs/home/jenkins/minikube-integration/15770-4071/.minikube/certs/11145.pem (1338 bytes)
W0203 22:53:31.679666 29404 certs.go:397] ignoring /home/jenkins/minikube-integration/15770-4071/.minikube/certs/home/jenkins/minikube-integration/15770-4071/.minikube/certs/11145_empty.pem, impossibly tiny 0 bytes
I0203 22:53:31.679681 29404 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-4071/.minikube/certs/home/jenkins/minikube-integration/15770-4071/.minikube/certs/ca-key.pem (1675 bytes)
I0203 22:53:31.679702 29404 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-4071/.minikube/certs/home/jenkins/minikube-integration/15770-4071/.minikube/certs/ca.pem (1078 bytes)
I0203 22:53:31.679728 29404 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-4071/.minikube/certs/home/jenkins/minikube-integration/15770-4071/.minikube/certs/cert.pem (1123 bytes)
I0203 22:53:31.679759 29404 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-4071/.minikube/certs/home/jenkins/minikube-integration/15770-4071/.minikube/certs/key.pem (1679 bytes)
I0203 22:53:31.679811 29404 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-4071/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15770-4071/.minikube/files/etc/ssl/certs/111452.pem (1708 bytes)
I0203 22:53:31.680400 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0203 22:53:31.701348 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0203 22:53:31.722219 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0203 22:53:31.743096 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0203 22:53:31.764848 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0203 22:53:31.785302 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0203 22:53:31.805829 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0203 22:53:31.827125 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0203 22:53:31.849051 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0203 22:53:31.870259 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/certs/11145.pem --> /usr/share/ca-certificates/11145.pem (1338 bytes)
I0203 22:53:31.891052 29404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-4071/.minikube/files/etc/ssl/certs/111452.pem --> /usr/share/ca-certificates/111452.pem (1708 bytes)
I0203 22:53:31.912386 29404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0203 22:53:31.927349 29404 ssh_runner.go:195] Run: openssl version
I0203 22:53:31.932435 29404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0203 22:53:31.942347 29404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0203 22:53:31.946685 29404 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 3 22:10 /usr/share/ca-certificates/minikubeCA.pem
I0203 22:53:31.946714 29404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0203 22:53:31.951908 29404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0203 22:53:31.961782 29404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11145.pem && ln -fs /usr/share/ca-certificates/11145.pem /etc/ssl/certs/11145.pem"
I0203 22:53:31.971723 29404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11145.pem
I0203 22:53:31.976077 29404 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 3 22:15 /usr/share/ca-certificates/11145.pem
I0203 22:53:31.976111 29404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11145.pem
I0203 22:53:31.981189 29404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11145.pem /etc/ssl/certs/51391683.0"
I0203 22:53:31.991021 29404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111452.pem && ln -fs /usr/share/ca-certificates/111452.pem /etc/ssl/certs/111452.pem"
I0203 22:53:32.000992 29404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111452.pem
I0203 22:53:32.005087 29404 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 3 22:15 /usr/share/ca-certificates/111452.pem
I0203 22:53:32.005128 29404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111452.pem
I0203 22:53:32.010205 29404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111452.pem /etc/ssl/certs/3ec20f2e.0"
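Each CA above is installed twice: copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (the b5213941.0-style names), which is how OpenSSL finds trust anchors. A hedged Go sketch of that hash-and-link step (linkByHash is an invented helper name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a certificate and
// symlinks it as <certsDir>/<hash>.0, like the ln -fs commands in the log.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}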
I0203 22:53:32.020244 29404 kubeadm.go:401] StartCluster: {Name:test-preload-617863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15728/minikube-v1.29.0-1675184411-15728-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.24.4 ClusterName:test-preload-617863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0203 22:53:32.020325 29404 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0203 22:53:32.020373 29404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0203 22:53:32.047753 29404 cri.go:87] found id: ""
I0203 22:53:32.047808 29404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0203 22:53:32.056826 29404 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0203 22:53:32.056840 29404 kubeadm.go:633] restartCluster start
I0203 22:53:32.056869 29404 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0203 22:53:32.065624 29404 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0203 22:53:32.065981 29404 kubeconfig.go:135] verify returned: extract IP: "test-preload-617863" does not appear in /home/jenkins/minikube-integration/15770-4071/kubeconfig
I0203 22:53:32.066063 29404 kubeconfig.go:146] "test-preload-617863" context is missing from /home/jenkins/minikube-integration/15770-4071/kubeconfig - will repair!
I0203 22:53:32.066326 29404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-4071/kubeconfig: {Name:mk088fdf72a35429ad95fbef031f92ed249406bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 22:53:32.067048 29404 kapi.go:59] client config for test-preload-617863: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/client.crt", KeyFile:"/home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/client.key", CAFile:"/home/jenkins/minikube-integration/15770-4071/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1891540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0203 22:53:32.067681 29404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0203 22:53:32.076206 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:32.076236 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:32.087398 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:32.588106 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:32.588183 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:32.599688 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:33.088312 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:33.088375 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:33.100942 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:33.587515 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:33.587588 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:33.599547 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:34.088325 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:34.088389 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:34.100296 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:34.587826 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:34.587899 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:34.600241 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:35.087776 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:35.087845 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:35.099532 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:35.588146 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:35.588250 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:35.599810 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:36.088309 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:36.088394 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:36.100106 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:36.587662 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:36.587749 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:36.599541 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:37.088052 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:37.088144 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:37.099783 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:37.588416 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:37.588509 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:37.599912 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:38.087577 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:38.087640 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:38.099579 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:38.588328 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:38.588398 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:38.599939 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:39.087655 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:39.087712 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:39.099730 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:39.588413 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:39.588471 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:39.600139 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:40.087661 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:40.087738 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:40.099764 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:40.588444 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:40.588538 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:40.600104 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:41.087638 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:41.087713 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:41.099527 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:41.587963 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:41.588053 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:41.599719 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:42.088406 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:42.088482 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:42.099546 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:42.099563 29404 api_server.go:165] Checking apiserver status ...
I0203 22:53:42.099622 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0203 22:53:42.109798 29404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0203 22:53:42.109814 29404 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
I0203 22:53:42.109820 29404 kubeadm.go:1120] stopping kube-system containers ...
I0203 22:53:42.109831 29404 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0203 22:53:42.109878 29404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0203 22:53:42.140460 29404 cri.go:87] found id: ""
I0203 22:53:42.140504 29404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0203 22:53:42.155341 29404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0203 22:53:42.164251 29404 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0203 22:53:42.164290 29404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0203 22:53:42.173057 29404 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0203 22:53:42.173089 29404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0203 22:53:42.259879 29404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0203 22:53:42.811095 29404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0203 22:53:43.143345 29404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0203 22:53:43.217693 29404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0203 22:53:43.287116 29404 api_server.go:51] waiting for apiserver process to appear ...
I0203 22:53:43.287168 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0203 22:53:43.804280 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0203 22:53:44.304733 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0203 22:53:44.803979 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0203 22:53:44.815134 29404 api_server.go:71] duration metric: took 1.528025257s to wait for apiserver process to appear ...
I0203 22:53:44.815162 29404 api_server.go:87] waiting for apiserver healthz status ...
I0203 22:53:44.815174 29404 api_server.go:252] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
I0203 22:53:49.816290 29404 api_server.go:268] stopped: https://192.168.39.125:8443/healthz: Get "https://192.168.39.125:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0203 22:53:50.317027 29404 api_server.go:252] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
I0203 22:53:55.317540 29404 api_server.go:268] stopped: https://192.168.39.125:8443/healthz: Get "https://192.168.39.125:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0203 22:53:55.817199 29404 api_server.go:252] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
I0203 22:54:00.817962 29404 api_server.go:268] stopped: https://192.168.39.125:8443/healthz: Get "https://192.168.39.125:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0203 22:54:01.316459 29404 api_server.go:252] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
I0203 22:54:03.328850 29404 api_server.go:278] https://192.168.39.125:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0203 22:54:03.328876 29404 api_server.go:102] status: https://192.168.39.125:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0203 22:54:03.816475 29404 api_server.go:252] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
I0203 22:54:03.822962 29404 api_server.go:278] https://192.168.39.125:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0203 22:54:03.822993 29404 api_server.go:102] status: https://192.168.39.125:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0203 22:54:04.316995 29404 api_server.go:252] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
I0203 22:54:04.332362 29404 api_server.go:278] https://192.168.39.125:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0203 22:54:04.332384 29404 api_server.go:102] status: https://192.168.39.125:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0203 22:54:04.817041 29404 api_server.go:252] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
I0203 22:54:04.824243 29404 api_server.go:278] https://192.168.39.125:8443/healthz returned 200:
ok
I0203 22:54:04.830885 29404 api_server.go:140] control plane version: v1.24.4
I0203 22:54:04.830907 29404 api_server.go:130] duration metric: took 20.01573898s to wait for apiserver health ...
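The healthz wait above tolerates both the early 403 (the probe is effectively unauthenticated, so it is rejected until the RBAC bootstrap roles exist) and the 500s while post-start hooks finish, and only stops on a 200. A simplified polling sketch, assuming TLS verification is skipped rather than presenting the client certificate minikube actually uses (waitForHealthz is illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// Non-200 (403/500) is expected while bootstrap is still running.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	if err := waitForHealthz("https://192.168.39.125:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}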
I0203 22:54:04.830915 29404 cni.go:84] Creating CNI manager for ""
I0203 22:54:04.830921 29404 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0203 22:54:04.832872 29404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0203 22:54:04.834323 29404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0203 22:54:04.846123 29404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
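With the kvm2 driver and containerd, minikube recommends its bridge CNI and drops a conflist into /etc/cni/net.d (the 457-byte 1-k8s.conflist written above). The exact template is not shown in the log; the Go sketch below writes a representative bridge + portmap chain using the cluster's 10.244.0.0/16 pod CIDR, purely to illustrate the file's shape:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Assumed layout of a bridge CNI config; minikube's real template may differ.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"isGateway":   true,
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, _ := json.MarshalIndent(conflist, "", "  ")
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}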
I0203 22:54:04.863656 29404 system_pods.go:43] waiting for kube-system pods to appear ...
I0203 22:54:04.873775 29404 system_pods.go:59] 7 kube-system pods found
I0203 22:54:04.873801 29404 system_pods.go:61] "coredns-6d4b75cb6d-n5ths" [ad243c0f-dab8-4bbd-8686-d18f20c141a4] Running
I0203 22:54:04.873813 29404 system_pods.go:61] "etcd-test-preload-617863" [cf159fb2-d97f-4542-bc3a-c330e11d8045] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0203 22:54:04.873821 29404 system_pods.go:61] "kube-apiserver-test-preload-617863" [7772ee1d-1191-4015-b439-7d3aaef2c09a] Running
I0203 22:54:04.873834 29404 system_pods.go:61] "kube-controller-manager-test-preload-617863" [8fadff8f-084a-4b33-9441-bae64b9d1acf] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0203 22:54:04.873847 29404 system_pods.go:61] "kube-proxy-tbqsp" [fe71f24d-5164-4f13-a81a-16fe06f59191] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0203 22:54:04.873856 29404 system_pods.go:61] "kube-scheduler-test-preload-617863" [9229b826-507b-4c8a-8972-53fbc1c28896] Running
I0203 22:54:04.873867 29404 system_pods.go:61] "storage-provisioner" [7eafb012-4ea5-4377-8604-a32dc2a7b454] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0203 22:54:04.873875 29404 system_pods.go:74] duration metric: took 10.20686ms to wait for pod list to return data ...
I0203 22:54:04.873887 29404 node_conditions.go:102] verifying NodePressure condition ...
I0203 22:54:04.884411 29404 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0203 22:54:04.884441 29404 node_conditions.go:123] node cpu capacity is 2
I0203 22:54:04.884454 29404 node_conditions.go:105] duration metric: took 10.559917ms to run NodePressure ...
I0203 22:54:04.884472 29404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0203 22:54:05.127123 29404 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0203 22:54:05.134262 29404 kubeadm.go:784] kubelet initialised
I0203 22:54:05.134288 29404 kubeadm.go:785] duration metric: took 7.135292ms waiting for restarted kubelet to initialise ...
I0203 22:54:05.134297 29404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0203 22:54:05.139769 29404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace to be "Ready" ...
I0203 22:54:05.146692 29404 pod_ready.go:97] node "test-preload-617863" hosting pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:05.146716 29404 pod_ready.go:81] duration metric: took 6.923246ms waiting for pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace to be "Ready" ...
E0203 22:54:05.146727 29404 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-617863" hosting pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:05.146735 29404 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:05.152350 29404 pod_ready.go:97] node "test-preload-617863" hosting pod "etcd-test-preload-617863" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:05.152369 29404 pod_ready.go:81] duration metric: took 5.625356ms waiting for pod "etcd-test-preload-617863" in "kube-system" namespace to be "Ready" ...
E0203 22:54:05.152378 29404 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-617863" hosting pod "etcd-test-preload-617863" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:05.152391 29404 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:05.157928 29404 pod_ready.go:97] node "test-preload-617863" hosting pod "kube-apiserver-test-preload-617863" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:05.157948 29404 pod_ready.go:81] duration metric: took 5.544297ms waiting for pod "kube-apiserver-test-preload-617863" in "kube-system" namespace to be "Ready" ...
E0203 22:54:05.157957 29404 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-617863" hosting pod "kube-apiserver-test-preload-617863" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:05.157964 29404 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:05.268498 29404 pod_ready.go:97] node "test-preload-617863" hosting pod "kube-controller-manager-test-preload-617863" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:05.268521 29404 pod_ready.go:81] duration metric: took 110.543935ms waiting for pod "kube-controller-manager-test-preload-617863" in "kube-system" namespace to be "Ready" ...
E0203 22:54:05.268530 29404 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-617863" hosting pod "kube-controller-manager-test-preload-617863" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:05.268535 29404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tbqsp" in "kube-system" namespace to be "Ready" ...
I0203 22:54:05.669646 29404 pod_ready.go:97] node "test-preload-617863" hosting pod "kube-proxy-tbqsp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:05.669666 29404 pod_ready.go:81] duration metric: took 401.125283ms waiting for pod "kube-proxy-tbqsp" in "kube-system" namespace to be "Ready" ...
E0203 22:54:05.669674 29404 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-617863" hosting pod "kube-proxy-tbqsp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:05.669680 29404 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:06.066974 29404 pod_ready.go:97] node "test-preload-617863" hosting pod "kube-scheduler-test-preload-617863" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:06.066999 29404 pod_ready.go:81] duration metric: took 397.312359ms waiting for pod "kube-scheduler-test-preload-617863" in "kube-system" namespace to be "Ready" ...
E0203 22:54:06.067010 29404 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-617863" hosting pod "kube-scheduler-test-preload-617863" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-617863" has status "Ready":"False"
I0203 22:54:06.067018 29404 pod_ready.go:38] duration metric: took 932.711125ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
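pod_ready.go treats a pod as not Ready while its node still reports Ready=False, which is why every per-pod wait above is skipped and the whole extra wait completes in under a second. A client-go sketch of the underlying Ready-condition poll (kubeconfig path and pod name taken from the log, the waitPodReady helper and intervals are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod's Ready condition until it is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15770-4071/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "coredns-6d4b75cb6d-n5ths", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}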
I0203 22:54:06.067035 29404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0203 22:54:06.086125 29404 ops.go:34] apiserver oom_adj: -16
I0203 22:54:06.086144 29404 kubeadm.go:637] restartCluster took 34.029298348s
I0203 22:54:06.086158 29404 kubeadm.go:403] StartCluster complete in 34.065912001s
I0203 22:54:06.086178 29404 settings.go:142] acquiring lock: {Name:mk92d2c5e5307dcf6711bef09eb0539d5634f1cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 22:54:06.086254 29404 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15770-4071/kubeconfig
I0203 22:54:06.086905 29404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-4071/kubeconfig: {Name:mk088fdf72a35429ad95fbef031f92ed249406bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 22:54:06.087145 29404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0203 22:54:06.087237 29404 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0203 22:54:06.087343 29404 config.go:180] Loaded profile config "test-preload-617863": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0203 22:54:06.087354 29404 addons.go:65] Setting storage-provisioner=true in profile "test-preload-617863"
I0203 22:54:06.087370 29404 addons.go:227] Setting addon storage-provisioner=true in "test-preload-617863"
W0203 22:54:06.087389 29404 addons.go:236] addon storage-provisioner should already be in state true
I0203 22:54:06.087401 29404 addons.go:65] Setting default-storageclass=true in profile "test-preload-617863"
I0203 22:54:06.087431 29404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-617863"
I0203 22:54:06.087442 29404 host.go:66] Checking if "test-preload-617863" exists ...
I0203 22:54:06.087683 29404 kapi.go:59] client config for test-preload-617863: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/client.crt", KeyFile:"/home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/client.key", CAFile:"/home/jenkins/minikube-integration/15770-4071/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1891540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0203 22:54:06.087846 29404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0203 22:54:06.087889 29404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 22:54:06.087931 29404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0203 22:54:06.087981 29404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 22:54:06.091348 29404 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-617863" context rescaled to 1 replicas
I0203 22:54:06.091385 29404 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0203 22:54:06.094106 29404 out.go:177] * Verifying Kubernetes components...
I0203 22:54:06.095665 29404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0203 22:54:06.103776 29404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
I0203 22:54:06.103832 29404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36085
I0203 22:54:06.104212 29404 main.go:141] libmachine: () Calling .GetVersion
I0203 22:54:06.104259 29404 main.go:141] libmachine: () Calling .GetVersion
I0203 22:54:06.104709 29404 main.go:141] libmachine: Using API Version 1
I0203 22:54:06.104733 29404 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 22:54:06.104707 29404 main.go:141] libmachine: Using API Version 1
I0203 22:54:06.104769 29404 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 22:54:06.105043 29404 main.go:141] libmachine: () Calling .GetMachineName
I0203 22:54:06.105125 29404 main.go:141] libmachine: () Calling .GetMachineName
I0203 22:54:06.105276 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetState
I0203 22:54:06.105487 29404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0203 22:54:06.105523 29404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 22:54:06.107714 29404 kapi.go:59] client config for test-preload-617863: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/client.crt", KeyFile:"/home/jenkins/minikube-integration/15770-4071/.minikube/profiles/test-preload-617863/client.key", CAFile:"/home/jenkins/minikube-integration/15770-4071/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1891540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0203 22:54:06.116094 29404 addons.go:227] Setting addon default-storageclass=true in "test-preload-617863"
W0203 22:54:06.116111 29404 addons.go:236] addon default-storageclass should already be in state true
I0203 22:54:06.116133 29404 host.go:66] Checking if "test-preload-617863" exists ...
I0203 22:54:06.116423 29404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0203 22:54:06.116481 29404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 22:54:06.120777 29404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33967
I0203 22:54:06.121139 29404 main.go:141] libmachine: () Calling .GetVersion
I0203 22:54:06.121640 29404 main.go:141] libmachine: Using API Version 1
I0203 22:54:06.121665 29404 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 22:54:06.122019 29404 main.go:141] libmachine: () Calling .GetMachineName
I0203 22:54:06.122212 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetState
I0203 22:54:06.123668 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:54:06.125419 29404 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0203 22:54:06.126957 29404 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0203 22:54:06.126977 29404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0203 22:54:06.126993 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHHostname
I0203 22:54:06.130151 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:54:06.130648 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:54:06.130671 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:54:06.130823 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHPort
I0203 22:54:06.130970 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:54:06.131079 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHUsername
I0203 22:54:06.131264 29404 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15770-4071/.minikube/machines/test-preload-617863/id_rsa Username:docker}
I0203 22:54:06.133703 29404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
I0203 22:54:06.134012 29404 main.go:141] libmachine: () Calling .GetVersion
I0203 22:54:06.134443 29404 main.go:141] libmachine: Using API Version 1
I0203 22:54:06.134478 29404 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 22:54:06.134760 29404 main.go:141] libmachine: () Calling .GetMachineName
I0203 22:54:06.135342 29404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0203 22:54:06.135391 29404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0203 22:54:06.150034 29404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
I0203 22:54:06.150416 29404 main.go:141] libmachine: () Calling .GetVersion
I0203 22:54:06.150833 29404 main.go:141] libmachine: Using API Version 1
I0203 22:54:06.150858 29404 main.go:141] libmachine: () Calling .SetConfigRaw
I0203 22:54:06.151204 29404 main.go:141] libmachine: () Calling .GetMachineName
I0203 22:54:06.151415 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetState
I0203 22:54:06.152931 29404 main.go:141] libmachine: (test-preload-617863) Calling .DriverName
I0203 22:54:06.153180 29404 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0203 22:54:06.153199 29404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0203 22:54:06.153216 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHHostname
I0203 22:54:06.155626 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:54:06.155941 29404 main.go:141] libmachine: (test-preload-617863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:a8:f9", ip: ""} in network mk-test-preload-617863: {Iface:virbr1 ExpiryTime:2023-02-03 23:49:19 +0000 UTC Type:0 Mac:52:54:00:50:a8:f9 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-617863 Clientid:01:52:54:00:50:a8:f9}
I0203 22:54:06.155968 29404 main.go:141] libmachine: (test-preload-617863) DBG | domain test-preload-617863 has defined IP address 192.168.39.125 and MAC address 52:54:00:50:a8:f9 in network mk-test-preload-617863
I0203 22:54:06.156052 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHPort
I0203 22:54:06.156193 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHKeyPath
I0203 22:54:06.156307 29404 main.go:141] libmachine: (test-preload-617863) Calling .GetSSHUsername
I0203 22:54:06.156426 29404 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15770-4071/.minikube/machines/test-preload-617863/id_rsa Username:docker}
I0203 22:54:06.275712 29404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0203 22:54:06.279312 29404 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0203 22:54:06.279315 29404 node_ready.go:35] waiting up to 6m0s for node "test-preload-617863" to be "Ready" ...
I0203 22:54:06.292683 29404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0203 22:54:07.186234 29404 main.go:141] libmachine: Making call to close driver server
I0203 22:54:07.186263 29404 main.go:141] libmachine: (test-preload-617863) Calling .Close
I0203 22:54:07.186290 29404 main.go:141] libmachine: Making call to close driver server
I0203 22:54:07.186300 29404 main.go:141] libmachine: (test-preload-617863) Calling .Close
I0203 22:54:07.186515 29404 main.go:141] libmachine: Successfully made call to close driver server
I0203 22:54:07.186536 29404 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 22:54:07.186546 29404 main.go:141] libmachine: Making call to close driver server
I0203 22:54:07.186562 29404 main.go:141] libmachine: (test-preload-617863) Calling .Close
I0203 22:54:07.186587 29404 main.go:141] libmachine: Successfully made call to close driver server
I0203 22:54:07.186602 29404 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 22:54:07.186609 29404 main.go:141] libmachine: Making call to close driver server
I0203 22:54:07.186615 29404 main.go:141] libmachine: (test-preload-617863) Calling .Close
I0203 22:54:07.186747 29404 main.go:141] libmachine: Successfully made call to close driver server
I0203 22:54:07.186757 29404 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 22:54:07.186770 29404 main.go:141] libmachine: Making call to close driver server
I0203 22:54:07.186778 29404 main.go:141] libmachine: (test-preload-617863) Calling .Close
I0203 22:54:07.188281 29404 main.go:141] libmachine: (test-preload-617863) DBG | Closing plugin on server side
I0203 22:54:07.188282 29404 main.go:141] libmachine: Successfully made call to close driver server
I0203 22:54:07.188300 29404 main.go:141] libmachine: Successfully made call to close driver server
I0203 22:54:07.188307 29404 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 22:54:07.188312 29404 main.go:141] libmachine: Making call to close connection to plugin binary
I0203 22:54:07.188425 29404 main.go:141] libmachine: (test-preload-617863) DBG | Closing plugin on server side
I0203 22:54:07.191405 29404 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0203 22:54:07.192709 29404 addons.go:492] enable addons completed in 1.105484491s: enabled=[storage-provisioner default-storageclass]
I0203 22:54:08.285938 29404 node_ready.go:58] node "test-preload-617863" has status "Ready":"False"
I0203 22:54:10.785796 29404 node_ready.go:58] node "test-preload-617863" has status "Ready":"False"
I0203 22:54:13.285468 29404 node_ready.go:58] node "test-preload-617863" has status "Ready":"False"
I0203 22:54:13.784783 29404 node_ready.go:49] node "test-preload-617863" has status "Ready":"True"
I0203 22:54:13.784811 29404 node_ready.go:38] duration metric: took 7.505474177s waiting for node "test-preload-617863" to be "Ready" ...
I0203 22:54:13.784821 29404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0203 22:54:13.789411 29404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace to be "Ready" ...
I0203 22:54:15.801497 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:18.299104 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:20.299129 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:22.300742 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:24.799455 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:26.800459 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:29.300123 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:31.802770 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:34.300321 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:36.803339 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:39.300832 29404 pod_ready.go:102] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"False"
I0203 22:54:41.799369 29404 pod_ready.go:92] pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace has status "Ready":"True"
I0203 22:54:41.799393 29404 pod_ready.go:81] duration metric: took 28.00996409s waiting for pod "coredns-6d4b75cb6d-n5ths" in "kube-system" namespace to be "Ready" ...
I0203 22:54:41.799400 29404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:41.803687 29404 pod_ready.go:92] pod "etcd-test-preload-617863" in "kube-system" namespace has status "Ready":"True"
I0203 22:54:41.803702 29404 pod_ready.go:81] duration metric: took 4.296734ms waiting for pod "etcd-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:41.803709 29404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:41.808393 29404 pod_ready.go:92] pod "kube-apiserver-test-preload-617863" in "kube-system" namespace has status "Ready":"True"
I0203 22:54:41.808412 29404 pod_ready.go:81] duration metric: took 4.69664ms waiting for pod "kube-apiserver-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:41.808422 29404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:41.812940 29404 pod_ready.go:92] pod "kube-controller-manager-test-preload-617863" in "kube-system" namespace has status "Ready":"True"
I0203 22:54:41.812956 29404 pod_ready.go:81] duration metric: took 4.526325ms waiting for pod "kube-controller-manager-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:41.812965 29404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbqsp" in "kube-system" namespace to be "Ready" ...
I0203 22:54:41.816684 29404 pod_ready.go:92] pod "kube-proxy-tbqsp" in "kube-system" namespace has status "Ready":"True"
I0203 22:54:41.816696 29404 pod_ready.go:81] duration metric: took 3.724889ms waiting for pod "kube-proxy-tbqsp" in "kube-system" namespace to be "Ready" ...
I0203 22:54:41.816702 29404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:42.197575 29404 pod_ready.go:92] pod "kube-scheduler-test-preload-617863" in "kube-system" namespace has status "Ready":"True"
I0203 22:54:42.197594 29404 pod_ready.go:81] duration metric: took 380.886442ms waiting for pod "kube-scheduler-test-preload-617863" in "kube-system" namespace to be "Ready" ...
I0203 22:54:42.197603 29404 pod_ready.go:38] duration metric: took 28.412772463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0203 22:54:42.197620 29404 api_server.go:51] waiting for apiserver process to appear ...
I0203 22:54:42.197658 29404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0203 22:54:42.211475 29404 api_server.go:71] duration metric: took 36.120063248s to wait for apiserver process to appear ...
I0203 22:54:42.211490 29404 api_server.go:87] waiting for apiserver healthz status ...
I0203 22:54:42.211498 29404 api_server.go:252] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
I0203 22:54:42.216470 29404 api_server.go:278] https://192.168.39.125:8443/healthz returned 200:
ok
I0203 22:54:42.217459 29404 api_server.go:140] control plane version: v1.24.4
I0203 22:54:42.217479 29404 api_server.go:130] duration metric: took 5.983864ms to wait for apiserver health ...
I0203 22:54:42.217488 29404 system_pods.go:43] waiting for kube-system pods to appear ...
I0203 22:54:42.399523 29404 system_pods.go:59] 7 kube-system pods found
I0203 22:54:42.399546 29404 system_pods.go:61] "coredns-6d4b75cb6d-n5ths" [ad243c0f-dab8-4bbd-8686-d18f20c141a4] Running
I0203 22:54:42.399550 29404 system_pods.go:61] "etcd-test-preload-617863" [cf159fb2-d97f-4542-bc3a-c330e11d8045] Running
I0203 22:54:42.399555 29404 system_pods.go:61] "kube-apiserver-test-preload-617863" [7772ee1d-1191-4015-b439-7d3aaef2c09a] Running
I0203 22:54:42.399559 29404 system_pods.go:61] "kube-controller-manager-test-preload-617863" [8fadff8f-084a-4b33-9441-bae64b9d1acf] Running
I0203 22:54:42.399563 29404 system_pods.go:61] "kube-proxy-tbqsp" [fe71f24d-5164-4f13-a81a-16fe06f59191] Running
I0203 22:54:42.399567 29404 system_pods.go:61] "kube-scheduler-test-preload-617863" [9229b826-507b-4c8a-8972-53fbc1c28896] Running
I0203 22:54:42.399573 29404 system_pods.go:61] "storage-provisioner" [7eafb012-4ea5-4377-8604-a32dc2a7b454] Running
I0203 22:54:42.399579 29404 system_pods.go:74] duration metric: took 182.085801ms to wait for pod list to return data ...
I0203 22:54:42.399589 29404 default_sa.go:34] waiting for default service account to be created ...
I0203 22:54:42.596385 29404 default_sa.go:45] found service account: "default"
I0203 22:54:42.596405 29404 default_sa.go:55] duration metric: took 196.811569ms for default service account to be created ...
I0203 22:54:42.596412 29404 system_pods.go:116] waiting for k8s-apps to be running ...
I0203 22:54:42.799485 29404 system_pods.go:86] 7 kube-system pods found
I0203 22:54:42.799509 29404 system_pods.go:89] "coredns-6d4b75cb6d-n5ths" [ad243c0f-dab8-4bbd-8686-d18f20c141a4] Running
I0203 22:54:42.799514 29404 system_pods.go:89] "etcd-test-preload-617863" [cf159fb2-d97f-4542-bc3a-c330e11d8045] Running
I0203 22:54:42.799522 29404 system_pods.go:89] "kube-apiserver-test-preload-617863" [7772ee1d-1191-4015-b439-7d3aaef2c09a] Running
I0203 22:54:42.799527 29404 system_pods.go:89] "kube-controller-manager-test-preload-617863" [8fadff8f-084a-4b33-9441-bae64b9d1acf] Running
I0203 22:54:42.799530 29404 system_pods.go:89] "kube-proxy-tbqsp" [fe71f24d-5164-4f13-a81a-16fe06f59191] Running
I0203 22:54:42.799534 29404 system_pods.go:89] "kube-scheduler-test-preload-617863" [9229b826-507b-4c8a-8972-53fbc1c28896] Running
I0203 22:54:42.799538 29404 system_pods.go:89] "storage-provisioner" [7eafb012-4ea5-4377-8604-a32dc2a7b454] Running
I0203 22:54:42.799544 29404 system_pods.go:126] duration metric: took 203.128643ms to wait for k8s-apps to be running ...
I0203 22:54:42.799553 29404 system_svc.go:44] waiting for kubelet service to be running ....
I0203 22:54:42.799588 29404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0203 22:54:42.813129 29404 system_svc.go:56] duration metric: took 13.568331ms WaitForService to wait for kubelet.
I0203 22:54:42.813150 29404 kubeadm.go:578] duration metric: took 36.721741162s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0203 22:54:42.813165 29404 node_conditions.go:102] verifying NodePressure condition ...
I0203 22:54:42.996849 29404 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0203 22:54:42.996872 29404 node_conditions.go:123] node cpu capacity is 2
I0203 22:54:42.996882 29404 node_conditions.go:105] duration metric: took 183.712726ms to run NodePressure ...
I0203 22:54:42.996891 29404 start.go:228] waiting for startup goroutines ...
I0203 22:54:42.996897 29404 start.go:233] waiting for cluster config update ...
I0203 22:54:42.996907 29404 start.go:240] writing updated cluster config ...
I0203 22:54:42.997143 29404 ssh_runner.go:195] Run: rm -f paused
I0203 22:54:43.043664 29404 start.go:555] kubectl: 1.26.1, cluster: 1.24.4 (minor skew: 2)
I0203 22:54:43.045814 29404 out.go:177]
W0203 22:54:43.047280 29404 out.go:239] ! /usr/local/bin/kubectl is version 1.26.1, which may have incompatibilities with Kubernetes 1.24.4.
I0203 22:54:43.048700 29404 out.go:177] - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
I0203 22:54:43.050029 29404 out.go:177] * Done! kubectl is now configured to use "test-preload-617863" cluster and "default" namespace by default
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
00ed7a141e4c6 a4ca41631cc7a 3 seconds ago Running coredns 1 9049619bedcc1
9838f214811ab 6e38f40d628db 26 seconds ago Running storage-provisioner 1 7c905979b9a3b
f74bc18e5ecc6 03fa22539fc1c 27 seconds ago Running kube-scheduler 1 9d5c14c0479df
3808302ddf5a0 7a53d1e08ef58 38 seconds ago Running kube-proxy 1 bc2bc912ef576
356a030d87704 1f99cb6da9a82 39 seconds ago Running kube-controller-manager 1 7adb19c3f5d46
8b9b4b2092e49 aebe758cef4cd 45 seconds ago Running etcd 1 9afb071986644
547c94e52b37d 6cab9d1bed1be 59 seconds ago Running kube-apiserver 1 0701038d6bba8
*
* ==> containerd <==
* -- Journal begins at Fri 2023-02-03 22:53:09 UTC, ends at Fri 2023-02-03 22:54:44 UTC. --
Feb 03 22:54:16 test-preload-617863 containerd[631]: time="2023-02-03T22:54:16.459837118Z" level=info msg="CreateContainer within sandbox \"9d5c14c0479df70803b0b8acd900b07b0e06e2dc5a3c267ee139bf2a80941829\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 03 22:54:16 test-preload-617863 containerd[631]: time="2023-02-03T22:54:16.487624191Z" level=info msg="CreateContainer within sandbox \"9d5c14c0479df70803b0b8acd900b07b0e06e2dc5a3c267ee139bf2a80941829\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f74bc18e5ecc603e73747f85b345e10610f25760b9d24812a9d4b4ec03913f51\""
Feb 03 22:54:16 test-preload-617863 containerd[631]: time="2023-02-03T22:54:16.488345158Z" level=info msg="StartContainer for \"f74bc18e5ecc603e73747f85b345e10610f25760b9d24812a9d4b4ec03913f51\""
Feb 03 22:54:16 test-preload-617863 containerd[631]: time="2023-02-03T22:54:16.893589304Z" level=info msg="StartContainer for \"f74bc18e5ecc603e73747f85b345e10610f25760b9d24812a9d4b4ec03913f51\" returns successfully"
Feb 03 22:54:16 test-preload-617863 containerd[631]: time="2023-02-03T22:54:16.978516480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:7eafb012-4ea5-4377-8604-a32dc2a7b454,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c905979b9a3b1df1c971b81f66e22b1906875019b08b93c8e7502128620e7ce\""
Feb 03 22:54:16 test-preload-617863 containerd[631]: time="2023-02-03T22:54:16.983339544Z" level=info msg="CreateContainer within sandbox \"7c905979b9a3b1df1c971b81f66e22b1906875019b08b93c8e7502128620e7ce\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
Feb 03 22:54:16 test-preload-617863 containerd[631]: time="2023-02-03T22:54:16.997619242Z" level=error msg="CreateContainer within sandbox \"7c905979b9a3b1df1c971b81f66e22b1906875019b08b93c8e7502128620e7ce\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1249965317 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists"
Feb 03 22:54:17 test-preload-617863 containerd[631]: time="2023-02-03T22:54:17.461824909Z" level=info msg="CreateContainer within sandbox \"7c905979b9a3b1df1c971b81f66e22b1906875019b08b93c8e7502128620e7ce\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
Feb 03 22:54:17 test-preload-617863 containerd[631]: time="2023-02-03T22:54:17.493655178Z" level=info msg="CreateContainer within sandbox \"7c905979b9a3b1df1c971b81f66e22b1906875019b08b93c8e7502128620e7ce\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"9838f214811abab0390357c02d1346d4a5735a9c2b0591f56bf609b5d73c4d8c\""
Feb 03 22:54:17 test-preload-617863 containerd[631]: time="2023-02-03T22:54:17.494550043Z" level=info msg="StartContainer for \"9838f214811abab0390357c02d1346d4a5735a9c2b0591f56bf609b5d73c4d8c\""
Feb 03 22:54:17 test-preload-617863 containerd[631]: time="2023-02-03T22:54:17.636257529Z" level=info msg="StartContainer for \"9838f214811abab0390357c02d1346d4a5735a9c2b0591f56bf609b5d73c4d8c\" returns successfully"
Feb 03 22:54:25 test-preload-617863 containerd[631]: time="2023-02-03T22:54:25.354272978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-n5ths,Uid:ad243c0f-dab8-4bbd-8686-d18f20c141a4,Namespace:kube-system,Attempt:0,}"
Feb 03 22:54:25 test-preload-617863 containerd[631]: time="2023-02-03T22:54:25.362191209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-n5ths,Uid:ad243c0f-dab8-4bbd-8686-d18f20c141a4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1263383645 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists"
Feb 03 22:54:39 test-preload-617863 containerd[631]: time="2023-02-03T22:54:39.355201210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-n5ths,Uid:ad243c0f-dab8-4bbd-8686-d18f20c141a4,Namespace:kube-system,Attempt:0,}"
Feb 03 22:54:39 test-preload-617863 containerd[631]: time="2023-02-03T22:54:39.442221584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 03 22:54:39 test-preload-617863 containerd[631]: time="2023-02-03T22:54:39.442325623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 03 22:54:39 test-preload-617863 containerd[631]: time="2023-02-03T22:54:39.442337450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 03 22:54:39 test-preload-617863 containerd[631]: time="2023-02-03T22:54:39.442517008Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9049619bedcc1d43a1cf28f78df3b5bf1e56c5266141a3df743792322174546d pid=1563 runtime=io.containerd.runc.v2
Feb 03 22:54:39 test-preload-617863 containerd[631]: time="2023-02-03T22:54:39.751063034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-n5ths,Uid:ad243c0f-dab8-4bbd-8686-d18f20c141a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9049619bedcc1d43a1cf28f78df3b5bf1e56c5266141a3df743792322174546d\""
Feb 03 22:54:39 test-preload-617863 containerd[631]: time="2023-02-03T22:54:39.755089257Z" level=info msg="CreateContainer within sandbox \"9049619bedcc1d43a1cf28f78df3b5bf1e56c5266141a3df743792322174546d\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Feb 03 22:54:39 test-preload-617863 containerd[631]: time="2023-02-03T22:54:39.768528443Z" level=error msg="CreateContainer within sandbox \"9049619bedcc1d43a1cf28f78df3b5bf1e56c5266141a3df743792322174546d\" for &ContainerMetadata{Name:coredns,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3871188931 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists"
Feb 03 22:54:40 test-preload-617863 containerd[631]: time="2023-02-03T22:54:40.522360061Z" level=info msg="CreateContainer within sandbox \"9049619bedcc1d43a1cf28f78df3b5bf1e56c5266141a3df743792322174546d\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Feb 03 22:54:40 test-preload-617863 containerd[631]: time="2023-02-03T22:54:40.565125624Z" level=info msg="CreateContainer within sandbox \"9049619bedcc1d43a1cf28f78df3b5bf1e56c5266141a3df743792322174546d\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"00ed7a141e4c688943b93f96bf9d850f9b12369c0030496fc80b063ed3e74cae\""
Feb 03 22:54:40 test-preload-617863 containerd[631]: time="2023-02-03T22:54:40.565972723Z" level=info msg="StartContainer for \"00ed7a141e4c688943b93f96bf9d850f9b12369c0030496fc80b063ed3e74cae\""
Feb 03 22:54:40 test-preload-617863 containerd[631]: time="2023-02-03T22:54:40.655159140Z" level=info msg="StartContainer for \"00ed7a141e4c688943b93f96bf9d850f9b12369c0030496fc80b063ed3e74cae\" returns successfully"
*
* ==> coredns [00ed7a141e4c688943b93f96bf9d850f9b12369c0030496fc80b063ed3e74cae] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] 127.0.0.1:42687 - 60498 "HINFO IN 6562581593131605249.9048304665640926925. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.061033309s
*
* ==> describe nodes <==
* Name: test-preload-617863
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=test-preload-617863
kubernetes.io/os=linux
minikube.k8s.io/commit=b839c677c13f941c936975b72b386dd12a345761
minikube.k8s.io/name=test-preload-617863
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_02_03T22_50_10_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 03 Feb 2023 22:50:07 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: test-preload-617863
AcquireTime: <unset>
RenewTime: Fri, 03 Feb 2023 22:54:34 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 03 Feb 2023 22:54:13 +0000 Fri, 03 Feb 2023 22:50:04 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 03 Feb 2023 22:54:13 +0000 Fri, 03 Feb 2023 22:50:04 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 03 Feb 2023 22:54:13 +0000 Fri, 03 Feb 2023 22:50:04 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 03 Feb 2023 22:54:13 +0000 Fri, 03 Feb 2023 22:54:13 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.125
Hostname: test-preload-617863
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 8c6585c6fb5c4150964a6d4da7deee27
System UUID: 8c6585c6-fb5c-4150-964a-6d4da7deee27
Boot ID: 20063e84-4df1-424f-8c3a-ccda61684ebe
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.15
Kubelet Version: v1.24.4
Kube-Proxy Version: v1.24.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-n5ths 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (3%!)(MISSING) 170Mi (8%!)(MISSING) 4m21s
kube-system etcd-test-preload-617863 100m (5%!)(MISSING) 0 (0%!)(MISSING) 100Mi (4%!)(MISSING) 0 (0%!)(MISSING) 4m34s
kube-system kube-apiserver-test-preload-617863 250m (12%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4m34s
kube-system kube-controller-manager-test-preload-617863 200m (10%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4m34s
kube-system kube-proxy-tbqsp 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4m21s
kube-system kube-scheduler-test-preload-617863 100m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4m34s
kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4m19s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%!)(MISSING) 0 (0%!)(MISSING)
memory 170Mi (8%!)(MISSING) 170Mi (8%!)(MISSING)
ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 38s kube-proxy
Normal Starting 4m18s kube-proxy
Normal Starting 4m34s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m34s kubelet Node test-preload-617863 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m34s kubelet Node test-preload-617863 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m34s kubelet Node test-preload-617863 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m34s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 4m24s kubelet Node test-preload-617863 status is now: NodeReady
Normal RegisteredNode 4m22s node-controller Node test-preload-617863 event: Registered Node test-preload-617863 in Controller
Normal Starting 61s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 61s (x8 over 61s) kubelet Node test-preload-617863 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 61s (x8 over 61s) kubelet Node test-preload-617863 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 61s (x7 over 61s) kubelet Node test-preload-617863 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 61s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 28s node-controller Node test-preload-617863 event: Registered Node test-preload-617863 in Controller
*
* ==> dmesg <==
* [Feb 3 22:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.070161] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.904201] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.113360] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.149204] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.451041] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +14.709561] systemd-fstab-generator[530]: Ignoring "noauto" for root device
[ +2.829873] systemd-fstab-generator[560]: Ignoring "noauto" for root device
[ +0.102469] systemd-fstab-generator[571]: Ignoring "noauto" for root device
[ +0.129910] systemd-fstab-generator[584]: Ignoring "noauto" for root device
[ +0.105142] systemd-fstab-generator[595]: Ignoring "noauto" for root device
[ +0.238954] systemd-fstab-generator[622]: Ignoring "noauto" for root device
[ +12.869670] systemd-fstab-generator[817]: Ignoring "noauto" for root device
[Feb 3 22:54] kauditd_printk_skb: 7 callbacks suppressed
[ +10.702877] kauditd_printk_skb: 8 callbacks suppressed
*
* ==> etcd [8b9b4b2092e496a3621c7cb6bc4791930e95bc8e66df8cf07ef7cc8e9fc0d17c] <==
* {"level":"info","ts":"2023-02-03T22:53:59.081Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"f4d3edba9e42b28c","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-02-03T22:53:59.081Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-02-03T22:53:59.082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c switched to configuration voters=(17641705551115235980)"}
{"level":"info","ts":"2023-02-03T22:53:59.083Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","added-peer-id":"f4d3edba9e42b28c","added-peer-peer-urls":["https://192.168.39.125:2380"]}
{"level":"info","ts":"2023-02-03T22:53:59.083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-03T22:53:59.083Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-03T22:53:59.084Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-02-03T22:53:59.084Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f4d3edba9e42b28c","initial-advertise-peer-urls":["https://192.168.39.125:2380"],"listen-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-02-03T22:53:59.084Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-02-03T22:53:59.085Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.125:2380"}
{"level":"info","ts":"2023-02-03T22:53:59.085Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.125:2380"}
{"level":"info","ts":"2023-02-03T22:54:00.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c is starting a new election at term 2"}
{"level":"info","ts":"2023-02-03T22:54:00.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 2"}
{"level":"info","ts":"2023-02-03T22:54:00.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 2"}
{"level":"info","ts":"2023-02-03T22:54:00.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 3"}
{"level":"info","ts":"2023-02-03T22:54:00.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 3"}
{"level":"info","ts":"2023-02-03T22:54:00.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 3"}
{"level":"info","ts":"2023-02-03T22:54:00.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 3"}
{"level":"info","ts":"2023-02-03T22:54:00.568Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:test-preload-617863 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
{"level":"info","ts":"2023-02-03T22:54:00.569Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-03T22:54:00.569Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-03T22:54:00.570Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.125:2379"}
{"level":"info","ts":"2023-02-03T22:54:00.570Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-02-03T22:54:00.570Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-02-03T22:54:00.571Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> kernel <==
* 22:54:44 up 1 min, 0 users, load average: 0.27, 0.12, 0.04
Linux test-preload-617863 5.10.57 #1 SMP Tue Jan 31 20:34:13 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [547c94e52b37d6377393bf6031671b90574e008ece36cbb957460404f47fa6fb] <==
* I0203 22:54:03.260451 1 naming_controller.go:291] Starting NamingConditionController
I0203 22:54:03.260546 1 establishing_controller.go:76] Starting EstablishingController
I0203 22:54:03.260649 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0203 22:54:03.260765 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0203 22:54:03.260863 1 crd_finalizer.go:266] Starting CRDFinalizer
I0203 22:54:03.260977 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0203 22:54:03.276853 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0203 22:54:03.406304 1 shared_informer.go:262] Caches are synced for node_authorizer
I0203 22:54:03.423386 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0203 22:54:03.424780 1 cache.go:39] Caches are synced for autoregister controller
I0203 22:54:03.425378 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0203 22:54:03.429902 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0203 22:54:03.452596 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0203 22:54:03.453731 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0203 22:54:03.454346 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0203 22:54:03.911241 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0203 22:54:04.228679 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0203 22:54:04.987872 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0203 22:54:05.004322 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0203 22:54:05.064260 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0203 22:54:05.104222 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0203 22:54:05.115124 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0203 22:54:05.752588 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0203 22:54:16.338206 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0203 22:54:16.412921 1 controller.go:611] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [356a030d87704864e2b2a08235cacde35ce1424634f43f6866791b22efbb5b53] <==
* I0203 22:54:16.279931 1 shared_informer.go:262] Caches are synced for ReplicationController
I0203 22:54:16.284707 1 shared_informer.go:262] Caches are synced for job
I0203 22:54:16.285802 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0203 22:54:16.291520 1 shared_informer.go:262] Caches are synced for expand
I0203 22:54:16.294167 1 shared_informer.go:262] Caches are synced for TTL
I0203 22:54:16.294809 1 shared_informer.go:262] Caches are synced for daemon sets
I0203 22:54:16.298146 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0203 22:54:16.302664 1 shared_informer.go:262] Caches are synced for crt configmap
I0203 22:54:16.302755 1 shared_informer.go:262] Caches are synced for HPA
I0203 22:54:16.310753 1 shared_informer.go:262] Caches are synced for TTL after finished
I0203 22:54:16.318089 1 shared_informer.go:262] Caches are synced for PV protection
I0203 22:54:16.322815 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0203 22:54:16.323724 1 shared_informer.go:262] Caches are synced for deployment
I0203 22:54:16.327324 1 shared_informer.go:262] Caches are synced for cronjob
I0203 22:54:16.332111 1 shared_informer.go:262] Caches are synced for persistent volume
I0203 22:54:16.389415 1 shared_informer.go:262] Caches are synced for endpoint
I0203 22:54:16.426395 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0203 22:54:16.461262 1 shared_informer.go:262] Caches are synced for resource quota
I0203 22:54:16.481902 1 shared_informer.go:262] Caches are synced for stateful set
I0203 22:54:16.491109 1 shared_informer.go:262] Caches are synced for disruption
I0203 22:54:16.491122 1 disruption.go:371] Sending events to api server.
I0203 22:54:16.504431 1 shared_informer.go:262] Caches are synced for resource quota
I0203 22:54:16.919578 1 shared_informer.go:262] Caches are synced for garbage collector
I0203 22:54:16.919832 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0203 22:54:16.953270 1 shared_informer.go:262] Caches are synced for garbage collector
*
* ==> kube-proxy [3808302ddf5a07f7c1b72add04e3e275da4abf4d68d1a9f06214d6857d76c59f] <==
* I0203 22:54:05.680345 1 node.go:163] Successfully retrieved node IP: 192.168.39.125
I0203 22:54:05.680692 1 server_others.go:138] "Detected node IP" address="192.168.39.125"
I0203 22:54:05.680866 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0203 22:54:05.740150 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0203 22:54:05.740377 1 server_others.go:206] "Using iptables Proxier"
I0203 22:54:05.740554 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0203 22:54:05.740914 1 server.go:661] "Version info" version="v1.24.4"
I0203 22:54:05.741207 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0203 22:54:05.741877 1 config.go:317] "Starting service config controller"
I0203 22:54:05.742132 1 shared_informer.go:255] Waiting for caches to sync for service config
I0203 22:54:05.742291 1 config.go:226] "Starting endpoint slice config controller"
I0203 22:54:05.742445 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0203 22:54:05.746464 1 config.go:444] "Starting node config controller"
I0203 22:54:05.746642 1 shared_informer.go:255] Waiting for caches to sync for node config
I0203 22:54:05.842452 1 shared_informer.go:262] Caches are synced for service config
I0203 22:54:05.842663 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0203 22:54:05.847698 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [f74bc18e5ecc603e73747f85b345e10610f25760b9d24812a9d4b4ec03913f51] <==
* I0203 22:54:17.203987 1 serving.go:348] Generated self-signed cert in-memory
I0203 22:54:17.568603 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
I0203 22:54:17.568648 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0203 22:54:17.573226 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0203 22:54:17.576530 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0203 22:54:17.576576 1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0203 22:54:17.576598 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0203 22:54:17.576831 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0203 22:54:17.576838 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0203 22:54:17.576853 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0203 22:54:17.576857 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0203 22:54:17.677678 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0203 22:54:17.677826 1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0203 22:54:17.677981 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Fri 2023-02-03 22:53:09 UTC, ends at Fri 2023-02-03 22:54:44 UTC. --
Feb 03 22:54:05 test-preload-617863 kubelet[823]: E0203 22:54:05.335770 823 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-156417203 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/27: file exists" podSandboxID="bc2bc912ef576b5938183897c088ffc9dd1c3028daadf233f3d3d97efe5d7fe0"
Feb 03 22:54:05 test-preload-617863 kubelet[823]: E0203 22:54:05.335865 823 kuberuntime_manager.go:905] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.24.4,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8lm4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-tbqsp_kube-system(fe71f24d-5164-4f13-a81a-16fe06f59191): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-156417203 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/27: file exists
Feb 03 22:54:05 test-preload-617863 kubelet[823]: E0203 22:54:05.335895 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-156417203 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/27: file exists\"" pod="kube-system/kube-proxy-tbqsp" podUID=fe71f24d-5164-4f13-a81a-16fe06f59191
Feb 03 22:54:06 test-preload-617863 kubelet[823]: E0203 22:54:06.048282 823 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Feb 03 22:54:06 test-preload-617863 kubelet[823]: E0203 22:54:06.048368 823 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ad243c0f-dab8-4bbd-8686-d18f20c141a4-config-volume podName:ad243c0f-dab8-4bbd-8686-d18f20c141a4 nodeName:}" failed. No retries permitted until 2023-02-03 22:54:08.048354155 +0000 UTC m=+24.943783968 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ad243c0f-dab8-4bbd-8686-d18f20c141a4-config-volume") pod "coredns-6d4b75cb6d-n5ths" (UID: "ad243c0f-dab8-4bbd-8686-d18f20c141a4") : object "kube-system"/"coredns" not registered
Feb 03 22:54:06 test-preload-617863 kubelet[823]: E0203 22:54:06.353718 823 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6d4b75cb6d-n5ths" podUID=ad243c0f-dab8-4bbd-8686-d18f20c141a4
Feb 03 22:54:08 test-preload-617863 kubelet[823]: E0203 22:54:08.062345 823 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Feb 03 22:54:08 test-preload-617863 kubelet[823]: E0203 22:54:08.063100 823 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ad243c0f-dab8-4bbd-8686-d18f20c141a4-config-volume podName:ad243c0f-dab8-4bbd-8686-d18f20c141a4 nodeName:}" failed. No retries permitted until 2023-02-03 22:54:12.062919224 +0000 UTC m=+28.958349039 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ad243c0f-dab8-4bbd-8686-d18f20c141a4-config-volume") pod "coredns-6d4b75cb6d-n5ths" (UID: "ad243c0f-dab8-4bbd-8686-d18f20c141a4") : object "kube-system"/"coredns" not registered
Feb 03 22:54:12 test-preload-617863 kubelet[823]: E0203 22:54:12.271605 823 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-389449610 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28: file exists"
Feb 03 22:54:12 test-preload-617863 kubelet[823]: E0203 22:54:12.271727 823 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-389449610 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28: file exists" pod="kube-system/coredns-6d4b75cb6d-n5ths"
Feb 03 22:54:12 test-preload-617863 kubelet[823]: E0203 22:54:12.271749 823 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-389449610 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28: file exists" pod="kube-system/coredns-6d4b75cb6d-n5ths"
Feb 03 22:54:12 test-preload-617863 kubelet[823]: E0203 22:54:12.271790 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-n5ths_kube-system(ad243c0f-dab8-4bbd-8686-d18f20c141a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-n5ths_kube-system(ad243c0f-dab8-4bbd-8686-d18f20c141a4)\\\": rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-389449610 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28: file exists\"" pod="kube-system/coredns-6d4b75cb6d-n5ths" podUID=ad243c0f-dab8-4bbd-8686-d18f20c141a4
Feb 03 22:54:15 test-preload-617863 kubelet[823]: E0203 22:54:15.721425 823 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3660775944 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29: file exists" podSandboxID="9d5c14c0479df70803b0b8acd900b07b0e06e2dc5a3c267ee139bf2a80941829"
Feb 03 22:54:15 test-preload-617863 kubelet[823]: E0203 22:54:15.721512 823 kuberuntime_manager.go:905] container &Container{Name:kube-scheduler,Image:k8s.gcr.io/kube-scheduler:v1.24.4,Command:[kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=false],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/scheduler.conf,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},} start failed in pod kube-scheduler-test-preload-617863_kube-system(7bafea365c3d15c25b9d7c9cd28a0f22): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3660775944 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29: file exists
Feb 03 22:54:15 test-preload-617863 kubelet[823]: E0203 22:54:15.721540 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3660775944 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29: file exists\"" pod="kube-system/kube-scheduler-test-preload-617863" podUID=7bafea365c3d15c25b9d7c9cd28a0f22
Feb 03 22:54:16 test-preload-617863 kubelet[823]: E0203 22:54:16.998101 823 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1249965317 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists" podSandboxID="7c905979b9a3b1df1c971b81f66e22b1906875019b08b93c8e7502128620e7ce"
Feb 03 22:54:16 test-preload-617863 kubelet[823]: E0203 22:54:16.998183 823 kuberuntime_manager.go:905] container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6qf99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod storage-provisioner_kube-system(7eafb012-4ea5-4377-8604-a32dc2a7b454): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1249965317 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists
Feb 03 22:54:16 test-preload-617863 kubelet[823]: E0203 22:54:16.998214 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1249965317 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists\"" pod="kube-system/storage-provisioner" podUID=7eafb012-4ea5-4377-8604-a32dc2a7b454
Feb 03 22:54:25 test-preload-617863 kubelet[823]: E0203 22:54:25.362362 823 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1263383645 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists"
Feb 03 22:54:25 test-preload-617863 kubelet[823]: E0203 22:54:25.362402 823 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1263383645 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists" pod="kube-system/coredns-6d4b75cb6d-n5ths"
Feb 03 22:54:25 test-preload-617863 kubelet[823]: E0203 22:54:25.362422 823 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1263383645 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists" pod="kube-system/coredns-6d4b75cb6d-n5ths"
Feb 03 22:54:25 test-preload-617863 kubelet[823]: E0203 22:54:25.362460 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-n5ths_kube-system(ad243c0f-dab8-4bbd-8686-d18f20c141a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-n5ths_kube-system(ad243c0f-dab8-4bbd-8686-d18f20c141a4)\\\": rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1263383645 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists\"" pod="kube-system/coredns-6d4b75cb6d-n5ths" podUID=ad243c0f-dab8-4bbd-8686-d18f20c141a4
Feb 03 22:54:39 test-preload-617863 kubelet[823]: E0203 22:54:39.768879 823 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3871188931 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists" podSandboxID="9049619bedcc1d43a1cf28f78df3b5bf1e56c5266141a3df743792322174546d"
Feb 03 22:54:39 test-preload-617863 kubelet[823]: E0203 22:54:39.769112 823 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8h5xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-n5ths_kube-system(ad243c0f-dab8-4bbd-8686-d18f20c141a4): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3871188931 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists
Feb 03 22:54:39 test-preload-617863 kubelet[823]: E0203 22:54:39.769598 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3871188931 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists\"" pod="kube-system/coredns-6d4b75cb6d-n5ths" podUID=ad243c0f-dab8-4bbd-8686-d18f20c141a4
*
* ==> storage-provisioner [9838f214811abab0390357c02d1346d4a5735a9c2b0591f56bf609b5d73c4d8c] <==
* I0203 22:54:17.678654 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0203 22:54:17.700393 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0203 22:54:17.700766 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0203 22:54:35.133362 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0203 22:54:35.135169 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-617863_754712e8-ab77-4daa-ab41-0e66485bf893!
I0203 22:54:35.137104 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dcdcf43b-6c1d-4aba-b8e5-729ea2b4076e", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-617863_754712e8-ab77-4daa-ab41-0e66485bf893 became leader
I0203 22:54:35.236253 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-617863_754712e8-ab77-4daa-ab41-0e66485bf893!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-617863 -n test-preload-617863
helpers_test.go:261: (dbg) Run: kubectl --context test-preload-617863 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-617863" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-617863
--- FAIL: TestPreload (341.95s)
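Every kubelet error in the post-mortem above is the same containerd overlayfs snapshotter failure: preparing a container or sandbox rootfs snapshot renames a staging directory (new-*) into a numbered snapshot directory (27, 28, 29, 31, 32, 33), and each rename is rejected with "file exists" because that numbered directory is already present on disk, so kube-proxy, kube-scheduler, storage-provisioner and coredns never start. This appears consistent with the on-disk snapshot directories getting out of sync with containerd's metadata across a restart. The commands below are a hypothetical diagnostic sketch, not part of the test: the snapshot path is taken verbatim from the errors, while the ssh/journalctl/ls invocations are assumptions added for illustration and would have to run before the profile is deleted.

# Count how many times this boot hit the rename collision (hypothetical check).
out/minikube-linux-amd64 ssh -p test-preload-617863 -- \
  sudo journalctl -u kubelet --no-pager | grep -c 'failed to rename'

# Confirm that the rename targets already exist on disk; numbered directories
# (27, 28, 29, ...) already present alongside leftover new-* staging directories
# would match the "file exists" errors in the kubelet log (hypothetical check).
out/minikube-linux-amd64 ssh -p test-preload-617863 -- \
  sudo ls /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots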