=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-113143 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E0223 05:04:27.391311 10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-113143 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (2m3.684060563s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-113143 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-113143 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.401288624s)
preload_test.go:63: (dbg) Run: out/minikube-linux-amd64 stop -p test-preload-113143
E0223 05:05:45.125780 10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 05:06:24.343379 10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-113143: (1m32.19821159s)
preload_test.go:71: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-113143 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd
E0223 05:07:50.400339 10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-113143 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd: (2m16.098011397s)
preload_test.go:80: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-113143 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got
-- stdout --
IMAGE                                       TAG                  IMAGE ID        SIZE
docker.io/kindest/kindnetd                  v20220726-ed811e41   d921cee849482   25.8MB
gcr.io/k8s-minikube/storage-provisioner     v5                   6e38f40d628db   9.06MB
k8s.gcr.io/coredns/coredns                  v1.8.6               a4ca41631cc7a   13.6MB
k8s.gcr.io/etcd                             3.5.3-0              aebe758cef4cd   102MB
k8s.gcr.io/kube-apiserver                   v1.24.4              6cab9d1bed1be   33.8MB
k8s.gcr.io/kube-controller-manager          v1.24.4              1f99cb6da9a82   31MB
k8s.gcr.io/kube-proxy                       v1.24.4              7a53d1e08ef58   39.5MB
k8s.gcr.io/kube-scheduler                   v1.24.4              03fa22539fc1c   15.5MB
k8s.gcr.io/pause                            3.7                  221177c6082a8   311kB
-- /stdout --
panic.go:522: *** TestPreload FAILED at 2023-02-23 05:09:33.487929216 +0000 UTC m=+2795.139514356
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-113143 -n test-preload-113143
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-113143 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-113143 logs -n 25: (1.111646148s)
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-945787 ssh -n | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
| | multinode-945787-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-945787 ssh -n multinode-945787 sudo cat | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
| | /home/docker/cp-test_multinode-945787-m03_multinode-945787.txt | | | | | |
| cp | multinode-945787 cp multinode-945787-m03:/home/docker/cp-test.txt | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
| | multinode-945787-m02:/home/docker/cp-test_multinode-945787-m03_multinode-945787-m02.txt | | | | | |
| ssh | multinode-945787 ssh -n | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
| | multinode-945787-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-945787 ssh -n multinode-945787-m02 sudo cat | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
| | /home/docker/cp-test_multinode-945787-m03_multinode-945787-m02.txt | | | | | |
| node | multinode-945787 node stop m03 | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
| node | multinode-945787 node start | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:46 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-945787 | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:46 UTC | |
| stop | -p multinode-945787 | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:46 UTC | 23 Feb 23 04:49 UTC |
| start | -p multinode-945787 | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:49 UTC | 23 Feb 23 04:54 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-945787 | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:54 UTC | |
| node | multinode-945787 node delete | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:54 UTC | 23 Feb 23 04:54 UTC |
| | m03 | | | | | |
| stop | multinode-945787 stop | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:54 UTC | 23 Feb 23 04:57 UTC |
| start | -p multinode-945787 | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 04:57 UTC | 23 Feb 23 05:02 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-945787 | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 05:02 UTC | |
| start | -p multinode-945787-m02 | multinode-945787-m02 | jenkins | v1.29.0 | 23 Feb 23 05:02 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-945787-m03 | multinode-945787-m03 | jenkins | v1.29.0 | 23 Feb 23 05:02 UTC | 23 Feb 23 05:03 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-945787 | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 05:03 UTC | |
| delete | -p multinode-945787-m03 | multinode-945787-m03 | jenkins | v1.29.0 | 23 Feb 23 05:03 UTC | 23 Feb 23 05:03 UTC |
| delete | -p multinode-945787 | multinode-945787 | jenkins | v1.29.0 | 23 Feb 23 05:03 UTC | 23 Feb 23 05:03 UTC |
| start | -p test-preload-113143 | test-preload-113143 | jenkins | v1.29.0 | 23 Feb 23 05:03 UTC | 23 Feb 23 05:05 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-113143 | test-preload-113143 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| stop | -p test-preload-113143 | test-preload-113143 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:07 UTC |
| start | -p test-preload-113143 | test-preload-113143 | jenkins | v1.29.0 | 23 Feb 23 05:07 UTC | 23 Feb 23 05:09 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p test-preload-113143 -- sudo | test-preload-113143 | jenkins | v1.29.0 | 23 Feb 23 05:09 UTC | 23 Feb 23 05:09 UTC |
| | crictl image ls | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/02/23 05:07:17
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.20.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0223 05:07:17.199394 25649 out.go:296] Setting OutFile to fd 1 ...
I0223 05:07:17.199549 25649 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 05:07:17.199556 25649 out.go:309] Setting ErrFile to fd 2...
I0223 05:07:17.199561 25649 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 05:07:17.199659 25649 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3857/.minikube/bin
I0223 05:07:17.200171 25649 out.go:303] Setting JSON to false
I0223 05:07:17.200968 25649 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2981,"bootTime":1677125856,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0223 05:07:17.201025 25649 start.go:135] virtualization: kvm guest
I0223 05:07:17.204770 25649 out.go:177] * [test-preload-113143] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0223 05:07:17.206833 25649 out.go:177] - MINIKUBE_LOCATION=15909
I0223 05:07:17.206781 25649 notify.go:220] Checking for updates...
I0223 05:07:17.208771 25649 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0223 05:07:17.210659 25649 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
I0223 05:07:17.212490 25649 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
I0223 05:07:17.214302 25649 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0223 05:07:17.216099 25649 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0223 05:07:17.218130 25649 config.go:182] Loaded profile config "test-preload-113143": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0223 05:07:17.218490 25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0223 05:07:17.218559 25649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:07:17.232539 25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
I0223 05:07:17.232909 25649 main.go:141] libmachine: () Calling .GetVersion
I0223 05:07:17.233570 25649 main.go:141] libmachine: Using API Version 1
I0223 05:07:17.233596 25649 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:07:17.233965 25649 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:07:17.234192 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:07:17.236476 25649 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
I0223 05:07:17.237947 25649 driver.go:365] Setting default libvirt URI to qemu:///system
I0223 05:07:17.238316 25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0223 05:07:17.238354 25649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:07:17.251983 25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
I0223 05:07:17.252375 25649 main.go:141] libmachine: () Calling .GetVersion
I0223 05:07:17.252791 25649 main.go:141] libmachine: Using API Version 1
I0223 05:07:17.252812 25649 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:07:17.253117 25649 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:07:17.253314 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:07:17.287623 25649 out.go:177] * Using the kvm2 driver based on existing profile
I0223 05:07:17.289266 25649 start.go:296] selected driver: kvm2
I0223 05:07:17.289281 25649 start.go:857] validating driver "kvm2" against &{Name:test-preload-113143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-113143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 05:07:17.289391 25649 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0223 05:07:17.290133 25649 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:07:17.290199 25649 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-3857/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0223 05:07:17.303744 25649 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0223 05:07:17.304036 25649 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0223 05:07:17.304074 25649 cni.go:84] Creating CNI manager for ""
I0223 05:07:17.304085 25649 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0223 05:07:17.304098 25649 start_flags.go:319] config:
{Name:test-preload-113143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-113143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 05:07:17.304208 25649 iso.go:125] acquiring lock: {Name:mk5ab603b94a1c1bcf9332974dc395e96678ad02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:07:17.306352 25649 out.go:177] * Starting control plane node test-preload-113143 in cluster test-preload-113143
I0223 05:07:17.307999 25649 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0223 05:07:17.464405 25649 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0223 05:07:17.464448 25649 cache.go:57] Caching tarball of preloaded images
I0223 05:07:17.464636 25649 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0223 05:07:17.466821 25649 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
I0223 05:07:17.468476 25649 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0223 05:07:17.621987 25649 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0223 05:07:40.595263 25649 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0223 05:07:40.595367 25649 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0223 05:07:41.455984 25649 cache.go:60] Finished verifying existence of preloaded tar for v1.24.4 on containerd
I0223 05:07:41.456125 25649 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/config.json ...
I0223 05:07:41.456330 25649 cache.go:193] Successfully downloaded all kic artifacts
I0223 05:07:41.456360 25649 start.go:364] acquiring machines lock for test-preload-113143: {Name:mke4f23d5c0e3b1877e0c2e0b8619868f067380e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0223 05:07:41.456413 25649 start.go:368] acquired machines lock for "test-preload-113143" in 37.228µs
I0223 05:07:41.456428 25649 start.go:96] Skipping create...Using existing machine configuration
I0223 05:07:41.456435 25649 fix.go:55] fixHost starting:
I0223 05:07:41.456739 25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0223 05:07:41.456774 25649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:07:41.471020 25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
I0223 05:07:41.471511 25649 main.go:141] libmachine: () Calling .GetVersion
I0223 05:07:41.472139 25649 main.go:141] libmachine: Using API Version 1
I0223 05:07:41.472162 25649 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:07:41.472538 25649 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:07:41.472766 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:07:41.472947 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetState
I0223 05:07:41.474757 25649 fix.go:103] recreateIfNeeded on test-preload-113143: state=Stopped err=<nil>
I0223 05:07:41.474788 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
W0223 05:07:41.474942 25649 fix.go:129] unexpected machine state, will restart: <nil>
I0223 05:07:41.477589 25649 out.go:177] * Restarting existing kvm2 VM for "test-preload-113143" ...
I0223 05:07:41.479402 25649 main.go:141] libmachine: (test-preload-113143) Calling .Start
I0223 05:07:41.479614 25649 main.go:141] libmachine: (test-preload-113143) Ensuring networks are active...
I0223 05:07:41.480404 25649 main.go:141] libmachine: (test-preload-113143) Ensuring network default is active
I0223 05:07:41.480929 25649 main.go:141] libmachine: (test-preload-113143) Ensuring network mk-test-preload-113143 is active
I0223 05:07:41.481371 25649 main.go:141] libmachine: (test-preload-113143) Getting domain xml...
I0223 05:07:41.482092 25649 main.go:141] libmachine: (test-preload-113143) Creating domain...
I0223 05:07:42.718470 25649 main.go:141] libmachine: (test-preload-113143) Waiting to get IP...
I0223 05:07:42.719286 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:42.719790 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:42.719898 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:42.719802 25685 retry.go:31] will retry after 242.200393ms: waiting for machine to come up
I0223 05:07:42.963258 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:42.963708 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:42.963731 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:42.963656 25685 retry.go:31] will retry after 245.679752ms: waiting for machine to come up
I0223 05:07:43.211198 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:43.211673 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:43.211701 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:43.211642 25685 retry.go:31] will retry after 312.378164ms: waiting for machine to come up
I0223 05:07:43.525218 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:43.525735 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:43.525766 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:43.525678 25685 retry.go:31] will retry after 371.12386ms: waiting for machine to come up
I0223 05:07:43.898112 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:43.898567 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:43.898593 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:43.898516 25685 retry.go:31] will retry after 472.035541ms: waiting for machine to come up
I0223 05:07:44.372140 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:44.372567 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:44.372584 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:44.372505 25685 retry.go:31] will retry after 867.802289ms: waiting for machine to come up
I0223 05:07:45.241677 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:45.242106 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:45.242138 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:45.242037 25685 retry.go:31] will retry after 1.053402506s: waiting for machine to come up
I0223 05:07:46.297149 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:46.297595 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:46.297627 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:46.297528 25685 retry.go:31] will retry after 1.268095409s: waiting for machine to come up
I0223 05:07:47.567342 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:47.567757 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:47.567787 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:47.567706 25685 retry.go:31] will retry after 1.549144571s: waiting for machine to come up
I0223 05:07:49.118344 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:49.118788 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:49.118823 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:49.118727 25685 retry.go:31] will retry after 1.399464384s: waiting for machine to come up
I0223 05:07:50.520326 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:50.520769 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:50.520798 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:50.520715 25685 retry.go:31] will retry after 1.965483635s: waiting for machine to come up
I0223 05:07:52.487224 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:52.487674 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:52.487694 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:52.487618 25685 retry.go:31] will retry after 2.653586815s: waiting for machine to come up
I0223 05:07:55.144303 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:55.144681 25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
I0223 05:07:55.144705 25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:55.144631 25685 retry.go:31] will retry after 3.236103195s: waiting for machine to come up
I0223 05:07:58.381962 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.382485 25649 main.go:141] libmachine: (test-preload-113143) Found IP for machine: 192.168.39.53
I0223 05:07:58.382507 25649 main.go:141] libmachine: (test-preload-113143) Reserving static IP address...
I0223 05:07:58.382517 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has current primary IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.382996 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "test-preload-113143", mac: "52:54:00:16:b0:47", ip: "192.168.39.53"} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:58.383019 25649 main.go:141] libmachine: (test-preload-113143) Reserved static IP address: 192.168.39.53
I0223 05:07:58.383036 25649 main.go:141] libmachine: (test-preload-113143) DBG | skip adding static IP to network mk-test-preload-113143 - found existing host DHCP lease matching {name: "test-preload-113143", mac: "52:54:00:16:b0:47", ip: "192.168.39.53"}
I0223 05:07:58.383051 25649 main.go:141] libmachine: (test-preload-113143) Waiting for SSH to be available...
I0223 05:07:58.383087 25649 main.go:141] libmachine: (test-preload-113143) DBG | Getting to WaitForSSH function...
I0223 05:07:58.385204 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.385496 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:58.385528 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.385609 25649 main.go:141] libmachine: (test-preload-113143) DBG | Using SSH client type: external
I0223 05:07:58.385641 25649 main.go:141] libmachine: (test-preload-113143) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa (-rw-------)
I0223 05:07:58.385670 25649 main.go:141] libmachine: (test-preload-113143) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa -p 22] /usr/bin/ssh <nil>}
I0223 05:07:58.385686 25649 main.go:141] libmachine: (test-preload-113143) DBG | About to run SSH command:
I0223 05:07:58.385699 25649 main.go:141] libmachine: (test-preload-113143) DBG | exit 0
I0223 05:07:58.481029 25649 main.go:141] libmachine: (test-preload-113143) DBG | SSH cmd err, output: <nil>:
I0223 05:07:58.481410 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetConfigRaw
I0223 05:07:58.482045 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetIP
I0223 05:07:58.484716 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.485082 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:58.485118 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.485307 25649 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/config.json ...
I0223 05:07:58.485495 25649 machine.go:88] provisioning docker machine ...
I0223 05:07:58.485513 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:07:58.485728 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetMachineName
I0223 05:07:58.485903 25649 buildroot.go:166] provisioning hostname "test-preload-113143"
I0223 05:07:58.485935 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetMachineName
I0223 05:07:58.486085 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
I0223 05:07:58.488073 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.488445 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:58.488475 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.488585 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
I0223 05:07:58.488740 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:07:58.488877 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:07:58.489047 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
I0223 05:07:58.489256 25649 main.go:141] libmachine: Using SSH client type: native
I0223 05:07:58.489742 25649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.53 22 <nil> <nil>}
I0223 05:07:58.489756 25649 main.go:141] libmachine: About to run SSH command:
sudo hostname test-preload-113143 && echo "test-preload-113143" | sudo tee /etc/hostname
I0223 05:07:58.633892 25649 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-113143
I0223 05:07:58.633930 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
I0223 05:07:58.636812 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.637259 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:58.637291 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.637540 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
I0223 05:07:58.637751 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:07:58.637948 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:07:58.638197 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
I0223 05:07:58.638392 25649 main.go:141] libmachine: Using SSH client type: native
I0223 05:07:58.638790 25649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.53 22 <nil> <nil>}
I0223 05:07:58.638810 25649 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-113143' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-113143/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-113143' | sudo tee -a /etc/hosts;
fi
fi
I0223 05:07:58.777962 25649 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 05:07:58.777993 25649 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3857/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3857/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3857/.minikube}
I0223 05:07:58.778013 25649 buildroot.go:174] setting up certificates
I0223 05:07:58.778020 25649 provision.go:83] configureAuth start
I0223 05:07:58.778029 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetMachineName
I0223 05:07:58.778345 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetIP
I0223 05:07:58.781234 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.781560 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:58.781590 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.781706 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
I0223 05:07:58.784153 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.784557 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:58.784574 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.784703 25649 provision.go:138] copyHostCerts
I0223 05:07:58.784771 25649 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3857/.minikube/ca.pem, removing ...
I0223 05:07:58.784781 25649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3857/.minikube/ca.pem
I0223 05:07:58.784860 25649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3857/.minikube/ca.pem (1082 bytes)
I0223 05:07:58.784962 25649 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3857/.minikube/cert.pem, removing ...
I0223 05:07:58.784985 25649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3857/.minikube/cert.pem
I0223 05:07:58.785022 25649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3857/.minikube/cert.pem (1123 bytes)
I0223 05:07:58.785207 25649 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3857/.minikube/key.pem, removing ...
I0223 05:07:58.785223 25649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3857/.minikube/key.pem
I0223 05:07:58.785279 25649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3857/.minikube/key.pem (1679 bytes)
I0223 05:07:58.785363 25649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3857/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca-key.pem org=jenkins.test-preload-113143 san=[192.168.39.53 192.168.39.53 localhost 127.0.0.1 minikube test-preload-113143]
I0223 05:07:58.929059 25649 provision.go:172] copyRemoteCerts
I0223 05:07:58.929113 25649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0223 05:07:58.929135 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
I0223 05:07:58.932008 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.932363 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:58.932388 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:58.932586 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
I0223 05:07:58.932843 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:07:58.933029 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
I0223 05:07:58.933203 25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
I0223 05:07:59.027161 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0223 05:07:59.051784 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0223 05:07:59.072983 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0223 05:07:59.094820 25649 provision.go:86] duration metric: configureAuth took 316.788731ms
I0223 05:07:59.094847 25649 buildroot.go:189] setting minikube options for container-runtime
I0223 05:07:59.095030 25649 config.go:182] Loaded profile config "test-preload-113143": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0223 05:07:59.095044 25649 machine.go:91] provisioned docker machine in 609.537637ms
I0223 05:07:59.095050 25649 start.go:300] post-start starting for "test-preload-113143" (driver="kvm2")
I0223 05:07:59.095058 25649 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0223 05:07:59.095091 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:07:59.095414 25649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0223 05:07:59.095440 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
I0223 05:07:59.098119 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:59.098451 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:59.098481 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:59.098647 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
I0223 05:07:59.098798 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:07:59.098942 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
I0223 05:07:59.099070 25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
I0223 05:07:59.195290 25649 ssh_runner.go:195] Run: cat /etc/os-release
I0223 05:07:59.199526 25649 info.go:137] Remote host: Buildroot 2021.02.12
I0223 05:07:59.199546 25649 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3857/.minikube/addons for local assets ...
I0223 05:07:59.199610 25649 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3857/.minikube/files for local assets ...
I0223 05:07:59.199677 25649 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/ssl/certs/108972.pem -> 108972.pem in /etc/ssl/certs
I0223 05:07:59.199755 25649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0223 05:07:59.208817 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/ssl/certs/108972.pem --> /etc/ssl/certs/108972.pem (1708 bytes)
I0223 05:07:59.230027 25649 start.go:303] post-start completed in 134.962953ms
I0223 05:07:59.230057 25649 fix.go:57] fixHost completed within 17.773619763s
I0223 05:07:59.230082 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
I0223 05:07:59.232783 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:59.233222 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:59.233249 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:59.233501 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
I0223 05:07:59.233664 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:07:59.233812 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:07:59.233917 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
I0223 05:07:59.234089 25649 main.go:141] libmachine: Using SSH client type: native
I0223 05:07:59.234589 25649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.53 22 <nil> <nil>}
I0223 05:07:59.234604 25649 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0223 05:07:59.365976 25649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677128879.330753450
I0223 05:07:59.366001 25649 fix.go:207] guest clock: 1677128879.330753450
I0223 05:07:59.366011 25649 fix.go:220] Guest: 2023-02-23 05:07:59.33075345 +0000 UTC Remote: 2023-02-23 05:07:59.2300616 +0000 UTC m=+42.069074072 (delta=100.69185ms)
I0223 05:07:59.366031 25649 fix.go:191] guest clock delta is within tolerance: 100.69185ms
I0223 05:07:59.366036 25649 start.go:83] releasing machines lock for "test-preload-113143", held for 17.909612918s
I0223 05:07:59.366054 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:07:59.366319 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetIP
I0223 05:07:59.369119 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:59.369450 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:59.369478 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:59.369655 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:07:59.370120 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:07:59.370279 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:07:59.370389 25649 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0223 05:07:59.370428 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
I0223 05:07:59.370465 25649 ssh_runner.go:195] Run: cat /version.json
I0223 05:07:59.370488 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
I0223 05:07:59.372856 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:59.373194 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:59.373222 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:59.373242 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:59.373360 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
I0223 05:07:59.373588 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:07:59.373691 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:07:59.373722 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:07:59.373757 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
I0223 05:07:59.373909 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
I0223 05:07:59.373986 25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
I0223 05:07:59.374124 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:07:59.374253 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
I0223 05:07:59.374384 25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
I0223 05:07:59.462006 25649 ssh_runner.go:195] Run: systemctl --version
I0223 05:07:59.587244 25649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0223 05:07:59.593062 25649 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0223 05:07:59.593140 25649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0223 05:07:59.609722 25649 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0223 05:07:59.609744 25649 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0223 05:07:59.609845 25649 ssh_runner.go:195] Run: sudo crictl images --output json
I0223 05:08:03.640598 25649 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.030724865s)
I0223 05:08:03.640731 25649 containerd.go:604] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
I0223 05:08:03.640780 25649 ssh_runner.go:195] Run: which lz4
I0223 05:08:03.644860 25649 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0223 05:08:03.648854 25649 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0223 05:08:03.648888 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
I0223 05:08:05.429827 25649 containerd.go:551] Took 1.784998 seconds to copy over tarball
I0223 05:08:05.429913 25649 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0223 05:08:08.520478 25649 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.090542368s)
I0223 05:08:08.520502 25649 containerd.go:558] Took 3.090646 seconds to extract the tarball
I0223 05:08:08.520510 25649 ssh_runner.go:146] rm: /preloaded.tar.lz4
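The lines above copy the preload tarball to the guest, extract it with "tar -I lz4 -C /var -xf /preloaded.tar.lz4", and delete it. A minimal Go sketch of just the extraction step, assuming lz4 and sudo are available on the target, might be:
-- sketch (Go, illustrative) --
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Mirrors the command in the log; run on the guest where /preloaded.tar.lz4 was copied.
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("took %s to extract the tarball", time.Since(start))
}
-- /sketch --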
I0223 05:08:08.560242 25649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 05:08:08.653618 25649 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0223 05:08:08.670957 25649 start.go:485] detecting cgroup driver to use...
I0223 05:08:08.671028 25649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0223 05:08:11.328472 25649 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.657426272s)
I0223 05:08:11.328526 25649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 05:08:11.341485 25649 docker.go:186] disabling cri-docker service (if available) ...
I0223 05:08:11.341556 25649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0223 05:08:11.356921 25649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0223 05:08:11.371823 25649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0223 05:08:11.472386 25649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0223 05:08:11.572476 25649 docker.go:202] disabling docker service ...
I0223 05:08:11.572540 25649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0223 05:08:11.587829 25649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0223 05:08:11.600726 25649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0223 05:08:11.700527 25649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0223 05:08:11.795882 25649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0223 05:08:11.809587 25649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 05:08:11.829239 25649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
I0223 05:08:11.838813 25649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0223 05:08:11.848244 25649 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0223 05:08:11.848310 25649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0223 05:08:11.857628 25649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 05:08:11.866681 25649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0223 05:08:11.875817 25649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 05:08:11.884840 25649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0223 05:08:11.894438 25649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
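The sed calls above rewrite /etc/containerd/config.toml in place: pause image, cgroup driver, runc runtime and CNI conf_dir. As a sketch of the same idea (not minikube's implementation), the SystemdCgroup flip could be done from Go with a multiline regexp; the file path comes from the log and the program is assumed to run as root.
-- sketch (Go, illustrative) --
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		log.Fatal(err)
	}
	log.Println("configured containerd to use the cgroupfs cgroup driver")
}
-- /sketch --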
I0223 05:08:11.903524 25649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0223 05:08:11.911821 25649 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0223 05:08:11.911887 25649 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0223 05:08:11.925604 25649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0223 05:08:11.934687 25649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 05:08:12.029355 25649 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0223 05:08:12.051952 25649 start.go:532] Will wait 60s for socket path /run/containerd/containerd.sock
I0223 05:08:12.052031 25649 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0223 05:08:12.059508 25649 retry.go:31] will retry after 1.231814604s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0223 05:08:13.292172 25649 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
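The "Will wait 60s for socket path" step above stats /run/containerd/containerd.sock and retries until containerd finishes restarting. A small Go sketch of that poll loop, with the path and timeout taken from the log, could be:
-- sketch (Go, illustrative) --
package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("containerd socket is up")
}
-- /sketch --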
I0223 05:08:13.297608 25649 start.go:553] Will wait 60s for crictl version
I0223 05:08:13.297683 25649 ssh_runner.go:195] Run: which crictl
I0223 05:08:13.301559 25649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0223 05:08:13.334139 25649 start.go:569] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.15
RuntimeApiVersion: v1alpha2
I0223 05:08:13.334214 25649 ssh_runner.go:195] Run: containerd --version
I0223 05:08:13.361722 25649 ssh_runner.go:195] Run: containerd --version
I0223 05:08:13.391188 25649 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.15 ...
I0223 05:08:13.393100 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetIP
I0223 05:08:13.396316 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:08:13.396740 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:08:13.396769 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:08:13.397018 25649 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0223 05:08:13.401321 25649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 05:08:13.412909 25649 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0223 05:08:13.412989 25649 ssh_runner.go:195] Run: sudo crictl images --output json
I0223 05:08:13.442114 25649 containerd.go:608] all images are preloaded for containerd runtime.
I0223 05:08:13.442137 25649 containerd.go:522] Images already preloaded, skipping extraction
I0223 05:08:13.442192 25649 ssh_runner.go:195] Run: sudo crictl images --output json
I0223 05:08:13.472062 25649 containerd.go:608] all images are preloaded for containerd runtime.
I0223 05:08:13.472089 25649 cache_images.go:84] Images are preloaded, skipping loading
I0223 05:08:13.472146 25649 ssh_runner.go:195] Run: sudo crictl info
I0223 05:08:13.503198 25649 cni.go:84] Creating CNI manager for ""
I0223 05:08:13.503218 25649 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0223 05:08:13.503233 25649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0223 05:08:13.503250 25649 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-113143 NodeName:test-preload-113143 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0223 05:08:13.503346 25649 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.53
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-113143"
kubeletExtraArgs:
node-ip: 192.168.39.53
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.4
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0223 05:08:13.503420 25649 kubeadm.go:968] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-113143 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
[Install]
config:
{KubernetesVersion:v1.24.4 ClusterName:test-preload-113143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0223 05:08:13.503466 25649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
I0223 05:08:13.513124 25649 binaries.go:44] Found k8s binaries, skipping transfer
I0223 05:08:13.513198 25649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0223 05:08:13.521506 25649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (483 bytes)
I0223 05:08:13.537467 25649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0223 05:08:13.553753 25649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
I0223 05:08:13.569923 25649 ssh_runner.go:195] Run: grep 192.168.39.53 control-plane.minikube.internal$ /etc/hosts
I0223 05:08:13.573737 25649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 05:08:13.585191 25649 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143 for IP: 192.168.39.53
I0223 05:08:13.585224 25649 certs.go:186] acquiring lock for shared ca certs: {Name:mk147ec0d78f2171aa54104168d81016e3102ce0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:08:13.585405 25649 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3857/.minikube/ca.key
I0223 05:08:13.585460 25649 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3857/.minikube/proxy-client-ca.key
I0223 05:08:13.585552 25649 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.key
I0223 05:08:13.585623 25649 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/apiserver.key.52e6c991
I0223 05:08:13.585679 25649 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/proxy-client.key
I0223 05:08:13.585799 25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/10897.pem (1338 bytes)
W0223 05:08:13.585848 25649 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/10897_empty.pem, impossibly tiny 0 bytes
I0223 05:08:13.585863 25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca-key.pem (1679 bytes)
I0223 05:08:13.585888 25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca.pem (1082 bytes)
I0223 05:08:13.585911 25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/cert.pem (1123 bytes)
I0223 05:08:13.585939 25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/key.pem (1679 bytes)
I0223 05:08:13.585977 25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/ssl/certs/108972.pem (1708 bytes)
I0223 05:08:13.586474 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0223 05:08:13.609249 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0223 05:08:13.631953 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0223 05:08:13.654262 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0223 05:08:13.676005 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0223 05:08:13.698579 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0223 05:08:13.720511 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0223 05:08:13.742388 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0223 05:08:13.764734 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0223 05:08:13.786568 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/certs/10897.pem --> /usr/share/ca-certificates/10897.pem (1338 bytes)
I0223 05:08:13.808858 25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/ssl/certs/108972.pem --> /usr/share/ca-certificates/108972.pem (1708 bytes)
I0223 05:08:13.830415 25649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0223 05:08:13.846105 25649 ssh_runner.go:195] Run: openssl version
I0223 05:08:13.851624 25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108972.pem && ln -fs /usr/share/ca-certificates/108972.pem /etc/ssl/certs/108972.pem"
I0223 05:08:13.862169 25649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108972.pem
I0223 05:08:13.866777 25649 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:30 /usr/share/ca-certificates/108972.pem
I0223 05:08:13.866834 25649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108972.pem
I0223 05:08:13.872260 25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/108972.pem /etc/ssl/certs/3ec20f2e.0"
I0223 05:08:13.882611 25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0223 05:08:13.892746 25649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0223 05:08:13.897423 25649 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:24 /usr/share/ca-certificates/minikubeCA.pem
I0223 05:08:13.897478 25649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0223 05:08:13.903129 25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0223 05:08:13.913460 25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10897.pem && ln -fs /usr/share/ca-certificates/10897.pem /etc/ssl/certs/10897.pem"
I0223 05:08:13.923690 25649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10897.pem
I0223 05:08:13.928178 25649 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:30 /usr/share/ca-certificates/10897.pem
I0223 05:08:13.928231 25649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10897.pem
I0223 05:08:13.933797 25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10897.pem /etc/ssl/certs/51391683.0"
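The openssl/ln sequence above computes each certificate's subject hash and links /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients can locate the CA. A rough Go sketch of one hash-and-link step is below; the certificate path is a placeholder, and in the log the links actually point at the copies already placed under /etc/ssl/certs.
-- sketch (Go, illustrative) --
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/108972.pem" // placeholder path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	if err := os.Symlink(certPath, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, certPath)
}
-- /sketch --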
I0223 05:08:13.943997 25649 kubeadm.go:401] StartCluster: {Name:test-preload-113143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-113143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 05:08:13.944113 25649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0223 05:08:13.944176 25649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0223 05:08:13.973285 25649 cri.go:87] found id: ""
I0223 05:08:13.973361 25649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0223 05:08:13.983168 25649 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0223 05:08:13.983190 25649 kubeadm.go:633] restartCluster start
I0223 05:08:13.983244 25649 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0223 05:08:13.992978 25649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0223 05:08:13.993473 25649 kubeconfig.go:135] verify returned: extract IP: "test-preload-113143" does not appear in /home/jenkins/minikube-integration/15909-3857/kubeconfig
I0223 05:08:13.993600 25649 kubeconfig.go:146] "test-preload-113143" context is missing from /home/jenkins/minikube-integration/15909-3857/kubeconfig - will repair!
I0223 05:08:13.994030 25649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3857/kubeconfig: {Name:mkddc8f3473e702a00229e22f9312b560d0d7a19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:08:13.994960 25649 kapi.go:59] client config for test-preload-113143: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 05:08:13.996111 25649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0223 05:08:14.005368 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:14.005419 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:14.016483 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:14.517221 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:14.517310 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:14.530287 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:15.016883 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:15.016976 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:15.029362 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:15.516921 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:15.517009 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:15.529274 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:16.016755 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:16.016828 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:16.029247 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:16.517481 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:16.517579 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:16.529984 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:17.016556 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:17.016655 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:17.029274 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:17.517512 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:17.517611 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:17.529815 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:18.017453 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:18.017570 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:18.030745 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:18.517334 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:18.517437 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:18.530492 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:19.017047 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:19.017119 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:19.029048 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:19.516639 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:19.516717 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:19.528888 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:20.017004 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:20.017076 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:20.029538 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:20.517078 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:20.517182 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:20.529300 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:21.016879 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:21.016949 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:21.029021 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:21.517030 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:21.517148 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:21.529484 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:22.016972 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:22.017085 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:22.030420 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:22.517206 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:22.517283 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:22.530610 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:23.017232 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:23.017317 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:23.029494 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:23.517088 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:23.517192 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:23.529324 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:24.017036 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:24.017116 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:24.029048 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:24.029073 25649 api_server.go:165] Checking apiserver status ...
I0223 05:08:24.029121 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:08:24.040476 25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:08:24.040517 25649 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
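The block above polls "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms and eventually gives up, concluding the cluster needs reconfiguring. A hedged Go sketch of such a poll (the timeout value here is arbitrary, not minikube's) could be:
-- sketch (Go, illustrative) --
package main

import (
	"log"
	"os/exec"
	"time"
)

// checkAPIServerProcess retries the pgrep probe seen in the log until the
// process shows up or the deadline passes.
func checkAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			return "", err
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	pid, err := checkAPIServerProcess(10 * time.Second)
	if err != nil {
		log.Fatalf("apiserver process not found: %v", err)
	}
	log.Printf("apiserver pid: %s", pid)
}
-- /sketch --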
I0223 05:08:24.040525 25649 kubeadm.go:1120] stopping kube-system containers ...
I0223 05:08:24.040537 25649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0223 05:08:24.040582 25649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0223 05:08:24.071808 25649 cri.go:87] found id: ""
I0223 05:08:24.071876 25649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0223 05:08:24.088332 25649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 05:08:24.096887 25649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 05:08:24.096935 25649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0223 05:08:24.105277 25649 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0223 05:08:24.105300 25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:08:24.211352 25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:08:25.132391 25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:08:25.476116 25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:08:25.549785 25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:08:25.644028 25649 api_server.go:51] waiting for apiserver process to appear ...
I0223 05:08:25.644110 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:08:26.156509 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:08:26.656373 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:08:27.156841 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:08:27.168423 25649 api_server.go:71] duration metric: took 1.524405489s to wait for apiserver process to appear ...
I0223 05:08:27.168455 25649 api_server.go:87] waiting for apiserver healthz status ...
I0223 05:08:27.168468 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:32.169192 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0223 05:08:32.670214 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:37.671107 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0223 05:08:38.169742 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:43.169986 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0223 05:08:43.669634 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:47.251123 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": read tcp 192.168.39.1:55308->192.168.39.53:8443: read: connection reset by peer
I0223 05:08:47.669523 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:47.670161 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:48.169917 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:48.170615 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:48.670314 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:48.670973 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:49.169433 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:49.169999 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:49.669569 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:49.670193 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:50.169433 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:50.169986 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:50.669588 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:50.670279 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:51.169429 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:51.170030 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:51.670242 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:51.670925 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:52.169438 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:52.169977 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:52.669695 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:52.670331 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:53.170027 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:53.170727 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:53.669326 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:53.669962 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:54.169768 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:54.170431 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:54.670108 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:54.670782 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:55.169349 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:55.170008 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:55.669573 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:55.670246 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:56.169781 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:56.170387 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:56.669437 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:56.670119 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:57.169798 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:57.170464 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:57.670175 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:57.670846 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:58.169434 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:58.170118 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:58.669689 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:58.670301 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:59.169967 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:59.170645 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:08:59.670303 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:08:59.670866 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:00.169447 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:00.170015 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:00.669609 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:00.670242 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:01.169844 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:01.170442 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:01.669666 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:01.670286 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:02.169968 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:02.170504 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:02.669327 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:02.669990 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:03.169523 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:03.170092 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:03.669673 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:03.670402 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:04.170162 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:04.170804 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:04.670373 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:04.670978 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:05.169540 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:05.170213 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:05.669969 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:05.670646 25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
I0223 05:09:06.170307 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:09.053386 25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0223 05:09:09.053420 25649 api_server.go:102] status: https://192.168.39.53:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0223 05:09:09.169642 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:09.179246 25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 05:09:09.179281 25649 api_server.go:102] status: https://192.168.39.53:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 05:09:09.669828 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:09.679957 25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 05:09:09.679988 25649 api_server.go:102] status: https://192.168.39.53:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 05:09:10.169518 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:10.175682 25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 05:09:10.175706 25649 api_server.go:102] status: https://192.168.39.53:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 05:09:10.669296 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:10.675585 25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 200:
ok
I0223 05:09:10.683086 25649 api_server.go:140] control plane version: v1.24.4
I0223 05:09:10.683109 25649 api_server.go:130] duration metric: took 43.514648081s to wait for apiserver health ...
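
Note on the /healthz exchange above: while the bootstrap post-start hooks (rbac/bootstrap-roles, and briefly scheduling/bootstrap-system-priority-classes) are still pending, the endpoint answers 500 with a per-check [+]/[-] breakdown, then flips to a bare 200 "ok" once every hook has finished. Below is a minimal Go sketch of the same style of poll; the certificate file names and the 500ms interval are assumptions for illustration, not minikube's actual api_server.go implementation.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Cert paths are placeholders; minikube keeps per-profile client certs
	// under ~/.minikube/profiles/<profile>/ (see the client config later in this log).
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		}},
		Timeout: 5 * time.Second,
	}

	for {
		resp, err := client.Get("https://192.168.39.53:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // all post-start hooks reported ok
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
}
```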
I0223 05:09:10.683119 25649 cni.go:84] Creating CNI manager for ""
I0223 05:09:10.683125 25649 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0223 05:09:10.685580 25649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0223 05:09:10.687507 25649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0223 05:09:10.698779 25649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
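
The bridge CNI step only records that a 457-byte 1-k8s.conflist is copied to /etc/cni/net.d; the file's contents are not in the log. As a rough sketch of the general shape of such a config (bridge plugin with host-local IPAM plus portmap), the snippet below writes a hypothetical conflist. Every field value, the subnet (taken from the node's PodCIDR shown later), and the plugin list are assumptions, not the file minikube actually generated.

```go
package main

import "os"

// A hypothetical bridge conflist in the usual CNI list format; illustrative only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Mirrors the mkdir + copy the log shows being done over SSH.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```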
I0223 05:09:10.719301 25649 system_pods.go:43] waiting for kube-system pods to appear ...
I0223 05:09:10.729302 25649 system_pods.go:59] 7 kube-system pods found
I0223 05:09:10.729336 25649 system_pods.go:61] "coredns-6d4b75cb6d-mmpvt" [3928e1dc-58bd-434f-bc29-8c20afb5e112] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0223 05:09:10.729342 25649 system_pods.go:61] "etcd-test-preload-113143" [65f0e6f1-4ff2-49bd-9f2f-58967808df14] Running
I0223 05:09:10.729348 25649 system_pods.go:61] "kube-apiserver-test-preload-113143" [e28969a2-5979-483e-bd07-658187cffae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0223 05:09:10.729354 25649 system_pods.go:61] "kube-controller-manager-test-preload-113143" [055f8ab8-0181-4121-8993-88d236e645c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0223 05:09:10.729366 25649 system_pods.go:61] "kube-proxy-bq8xz" [b957cd83-fc56-48cc-a924-775e7a3ad79f] Running
I0223 05:09:10.729370 25649 system_pods.go:61] "kube-scheduler-test-preload-113143" [901702d4-f84c-4418-a3df-ea323600a55d] Running
I0223 05:09:10.729375 25649 system_pods.go:61] "storage-provisioner" [a4976d12-2647-4fa6-8366-5d94a2155a2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0223 05:09:10.729380 25649 system_pods.go:74] duration metric: took 10.059269ms to wait for pod list to return data ...
I0223 05:09:10.729386 25649 node_conditions.go:102] verifying NodePressure condition ...
I0223 05:09:10.732752 25649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 05:09:10.732785 25649 node_conditions.go:123] node cpu capacity is 2
I0223 05:09:10.732804 25649 node_conditions.go:105] duration metric: took 3.413596ms to run NodePressure ...
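
The NodePressure verification above is reading the capacity figures the node itself reports (17784752Ki of ephemeral storage and 2 CPUs here). A hedged client-go sketch of that read follows; the kubeconfig path is a placeholder, and this is a sketch of the idea, not minikube's node_conditions.go code.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; any kubeconfig pointing at the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-113143", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is a map of resource name -> quantity on the Node status.
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s, cpu capacity is %s\n", storage.String(), cpu.String())
}
```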
I0223 05:09:10.732822 25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:09:10.949088 25649 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0223 05:09:10.953473 25649 kubeadm.go:784] kubelet initialised
I0223 05:09:10.953498 25649 kubeadm.go:785] duration metric: took 4.383999ms waiting for restarted kubelet to initialise ...
I0223 05:09:10.953506 25649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:09:10.958494 25649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace to be "Ready" ...
I0223 05:09:11.975358 25649 pod_ready.go:97] node "test-preload-113143" hosting pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:11.975388 25649 pod_ready.go:81] duration metric: took 1.016870166s waiting for pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace to be "Ready" ...
E0223 05:09:11.975396 25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:11.975402 25649 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:11.980129 25649 pod_ready.go:97] node "test-preload-113143" hosting pod "etcd-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:11.980146 25649 pod_ready.go:81] duration metric: took 4.738654ms waiting for pod "etcd-test-preload-113143" in "kube-system" namespace to be "Ready" ...
E0223 05:09:11.980153 25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "etcd-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:11.980159 25649 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:11.984045 25649 pod_ready.go:97] node "test-preload-113143" hosting pod "kube-apiserver-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:11.984063 25649 pod_ready.go:81] duration metric: took 3.898484ms waiting for pod "kube-apiserver-test-preload-113143" in "kube-system" namespace to be "Ready" ...
E0223 05:09:11.984071 25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "kube-apiserver-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:11.984076 25649 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:11.988262 25649 pod_ready.go:97] node "test-preload-113143" hosting pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:11.988280 25649 pod_ready.go:81] duration metric: took 4.198948ms waiting for pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace to be "Ready" ...
E0223 05:09:11.988287 25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:11.988292 25649 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bq8xz" in "kube-system" namespace to be "Ready" ...
I0223 05:09:12.323428 25649 pod_ready.go:97] node "test-preload-113143" hosting pod "kube-proxy-bq8xz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:12.323456 25649 pod_ready.go:81] duration metric: took 335.157819ms waiting for pod "kube-proxy-bq8xz" in "kube-system" namespace to be "Ready" ...
E0223 05:09:12.323466 25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "kube-proxy-bq8xz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:12.323475 25649 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:12.723593 25649 pod_ready.go:97] node "test-preload-113143" hosting pod "kube-scheduler-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:12.723619 25649 pod_ready.go:81] duration metric: took 400.136639ms waiting for pod "kube-scheduler-test-preload-113143" in "kube-system" namespace to be "Ready" ...
E0223 05:09:12.723630 25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "kube-scheduler-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
I0223 05:09:12.723639 25649 pod_ready.go:38] duration metric: took 1.770125437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
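
The pod_ready loop above skips every system pod while the node itself still reports Ready:"False", then re-runs the same wait later once kubelet marks the node Ready. Per pod, the decision comes down (roughly) to the Ready condition on the Pod object; a hedged client-go sketch of that check is below, using the pod name from this log and a placeholder kubeconfig path.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod carries condition Ready=True, which is
// roughly what the "to be Ready" wait in the log is checking for.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6d4b75cb6d-mmpvt", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}
```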
I0223 05:09:12.723657 25649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0223 05:09:12.735111 25649 ops.go:34] apiserver oom_adj: -16
I0223 05:09:12.735135 25649 kubeadm.go:637] restartCluster took 58.7519382s
I0223 05:09:12.735144 25649 kubeadm.go:403] StartCluster complete in 58.791151978s
I0223 05:09:12.735164 25649 settings.go:142] acquiring lock: {Name:mka9282d684f4d0ba7e9349607973a3a5eb0818b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:09:12.735244 25649 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15909-3857/kubeconfig
I0223 05:09:12.735870 25649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3857/kubeconfig: {Name:mkddc8f3473e702a00229e22f9312b560d0d7a19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:09:12.736109 25649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0223 05:09:12.736198 25649 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0223 05:09:12.736282 25649 addons.go:65] Setting storage-provisioner=true in profile "test-preload-113143"
I0223 05:09:12.736296 25649 addons.go:227] Setting addon storage-provisioner=true in "test-preload-113143"
W0223 05:09:12.736304 25649 addons.go:236] addon storage-provisioner should already be in state true
I0223 05:09:12.736361 25649 host.go:66] Checking if "test-preload-113143" exists ...
I0223 05:09:12.736354 25649 addons.go:65] Setting default-storageclass=true in profile "test-preload-113143"
I0223 05:09:12.736400 25649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-113143"
I0223 05:09:12.736406 25649 config.go:182] Loaded profile config "test-preload-113143": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0223 05:09:12.736712 25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0223 05:09:12.736671 25649 kapi.go:59] client config for test-preload-113143: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 05:09:12.736757 25649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:09:12.736871 25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0223 05:09:12.736930 25649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:09:12.740166 25649 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-113143" context rescaled to 1 replicas
I0223 05:09:12.740200 25649 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0223 05:09:12.743792 25649 out.go:177] * Verifying Kubernetes components...
I0223 05:09:12.745627 25649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 05:09:12.752188 25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
I0223 05:09:12.752607 25649 main.go:141] libmachine: () Calling .GetVersion
I0223 05:09:12.753170 25649 main.go:141] libmachine: Using API Version 1
I0223 05:09:12.753194 25649 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:09:12.753534 25649 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:09:12.753727 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetState
I0223 05:09:12.755629 25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
I0223 05:09:12.756000 25649 main.go:141] libmachine: () Calling .GetVersion
I0223 05:09:12.756458 25649 main.go:141] libmachine: Using API Version 1
I0223 05:09:12.756486 25649 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:09:12.756453 25649 kapi.go:59] client config for test-preload-113143: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 05:09:12.756839 25649 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:09:12.757429 25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0223 05:09:12.757474 25649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:09:12.769453 25649 addons.go:227] Setting addon default-storageclass=true in "test-preload-113143"
W0223 05:09:12.769472 25649 addons.go:236] addon default-storageclass should already be in state true
I0223 05:09:12.769496 25649 host.go:66] Checking if "test-preload-113143" exists ...
I0223 05:09:12.769860 25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0223 05:09:12.769915 25649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:09:12.771927 25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
I0223 05:09:12.772317 25649 main.go:141] libmachine: () Calling .GetVersion
I0223 05:09:12.772862 25649 main.go:141] libmachine: Using API Version 1
I0223 05:09:12.772890 25649 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:09:12.773219 25649 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:09:12.773424 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetState
I0223 05:09:12.774876 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:09:12.777199 25649 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0223 05:09:12.778819 25649 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0223 05:09:12.778838 25649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0223 05:09:12.778856 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
I0223 05:09:12.782177 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:09:12.782718 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:09:12.782743 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:09:12.782994 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
I0223 05:09:12.783179 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:09:12.783344 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
I0223 05:09:12.783502 25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
I0223 05:09:12.786602 25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
I0223 05:09:12.786945 25649 main.go:141] libmachine: () Calling .GetVersion
I0223 05:09:12.787431 25649 main.go:141] libmachine: Using API Version 1
I0223 05:09:12.787454 25649 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:09:12.787841 25649 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:09:12.788283 25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0223 05:09:12.788319 25649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:09:12.802826 25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
I0223 05:09:12.803246 25649 main.go:141] libmachine: () Calling .GetVersion
I0223 05:09:12.803820 25649 main.go:141] libmachine: Using API Version 1
I0223 05:09:12.803844 25649 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:09:12.804187 25649 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:09:12.804390 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetState
I0223 05:09:12.806151 25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
I0223 05:09:12.806497 25649 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0223 05:09:12.806515 25649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0223 05:09:12.806536 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
I0223 05:09:12.809692 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:09:12.809961 25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
I0223 05:09:12.809991 25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
I0223 05:09:12.810244 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
I0223 05:09:12.810402 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
I0223 05:09:12.810512 25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
I0223 05:09:12.810640 25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
I0223 05:09:12.944134 25649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0223 05:09:12.963210 25649 node_ready.go:35] waiting up to 6m0s for node "test-preload-113143" to be "Ready" ...
I0223 05:09:12.963225 25649 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0223 05:09:12.973424 25649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0223 05:09:13.762152 25649 main.go:141] libmachine: Making call to close driver server
I0223 05:09:13.762183 25649 main.go:141] libmachine: (test-preload-113143) Calling .Close
I0223 05:09:13.762207 25649 main.go:141] libmachine: Making call to close driver server
I0223 05:09:13.762227 25649 main.go:141] libmachine: (test-preload-113143) Calling .Close
I0223 05:09:13.762514 25649 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:09:13.762537 25649 main.go:141] libmachine: (test-preload-113143) DBG | Closing plugin on server side
I0223 05:09:13.762547 25649 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:09:13.762557 25649 main.go:141] libmachine: Making call to close driver server
I0223 05:09:13.762561 25649 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:09:13.762566 25649 main.go:141] libmachine: (test-preload-113143) Calling .Close
I0223 05:09:13.762572 25649 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:09:13.762514 25649 main.go:141] libmachine: (test-preload-113143) DBG | Closing plugin on server side
I0223 05:09:13.762580 25649 main.go:141] libmachine: Making call to close driver server
I0223 05:09:13.762621 25649 main.go:141] libmachine: (test-preload-113143) Calling .Close
I0223 05:09:13.762791 25649 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:09:13.762811 25649 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:09:13.762795 25649 main.go:141] libmachine: (test-preload-113143) DBG | Closing plugin on server side
I0223 05:09:13.762835 25649 main.go:141] libmachine: (test-preload-113143) DBG | Closing plugin on server side
I0223 05:09:13.762873 25649 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:09:13.762890 25649 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:09:13.762914 25649 main.go:141] libmachine: Making call to close driver server
I0223 05:09:13.762927 25649 main.go:141] libmachine: (test-preload-113143) Calling .Close
I0223 05:09:13.763214 25649 main.go:141] libmachine: (test-preload-113143) DBG | Closing plugin on server side
I0223 05:09:13.763252 25649 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:09:13.763267 25649 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:09:13.765721 25649 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0223 05:09:13.767394 25649 addons.go:492] enable addons completed in 1.031203078s: enabled=[storage-provisioner default-storageclass]
I0223 05:09:14.971052 25649 node_ready.go:58] node "test-preload-113143" has status "Ready":"False"
I0223 05:09:17.470330 25649 node_ready.go:58] node "test-preload-113143" has status "Ready":"False"
I0223 05:09:19.470714 25649 node_ready.go:58] node "test-preload-113143" has status "Ready":"False"
I0223 05:09:21.970214 25649 node_ready.go:49] node "test-preload-113143" has status "Ready":"True"
I0223 05:09:21.970238 25649 node_ready.go:38] duration metric: took 9.006994732s waiting for node "test-preload-113143" to be "Ready" ...
I0223 05:09:21.970246 25649 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:09:21.977729 25649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace to be "Ready" ...
I0223 05:09:23.988963 25649 pod_ready.go:102] pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace has status "Ready":"False"
I0223 05:09:25.989705 25649 pod_ready.go:102] pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace has status "Ready":"False"
I0223 05:09:27.991137 25649 pod_ready.go:102] pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace has status "Ready":"False"
I0223 05:09:29.995646 25649 pod_ready.go:102] pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace has status "Ready":"False"
I0223 05:09:31.989117 25649 pod_ready.go:92] pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace has status "Ready":"True"
I0223 05:09:31.989145 25649 pod_ready.go:81] duration metric: took 10.011389818s waiting for pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace to be "Ready" ...
I0223 05:09:31.989183 25649 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:31.995226 25649 pod_ready.go:92] pod "etcd-test-preload-113143" in "kube-system" namespace has status "Ready":"True"
I0223 05:09:31.995240 25649 pod_ready.go:81] duration metric: took 6.049576ms waiting for pod "etcd-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:31.995248 25649 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:32.004888 25649 pod_ready.go:92] pod "kube-apiserver-test-preload-113143" in "kube-system" namespace has status "Ready":"True"
I0223 05:09:32.004906 25649 pod_ready.go:81] duration metric: took 9.652018ms waiting for pod "kube-apiserver-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:32.004916 25649 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:32.010469 25649 pod_ready.go:92] pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace has status "Ready":"True"
I0223 05:09:32.010491 25649 pod_ready.go:81] duration metric: took 5.567242ms waiting for pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:32.010502 25649 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bq8xz" in "kube-system" namespace to be "Ready" ...
I0223 05:09:32.014813 25649 pod_ready.go:92] pod "kube-proxy-bq8xz" in "kube-system" namespace has status "Ready":"True"
I0223 05:09:32.014833 25649 pod_ready.go:81] duration metric: took 4.323391ms waiting for pod "kube-proxy-bq8xz" in "kube-system" namespace to be "Ready" ...
I0223 05:09:32.014843 25649 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:32.388101 25649 pod_ready.go:92] pod "kube-scheduler-test-preload-113143" in "kube-system" namespace has status "Ready":"True"
I0223 05:09:32.388121 25649 pod_ready.go:81] duration metric: took 373.270122ms waiting for pod "kube-scheduler-test-preload-113143" in "kube-system" namespace to be "Ready" ...
I0223 05:09:32.388131 25649 pod_ready.go:38] duration metric: took 10.417877146s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:09:32.388148 25649 api_server.go:51] waiting for apiserver process to appear ...
I0223 05:09:32.388192 25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:09:32.401795 25649 api_server.go:71] duration metric: took 19.66155846s to wait for apiserver process to appear ...
I0223 05:09:32.401828 25649 api_server.go:87] waiting for apiserver healthz status ...
I0223 05:09:32.401839 25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
I0223 05:09:32.407789 25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 200:
ok
I0223 05:09:32.408596 25649 api_server.go:140] control plane version: v1.24.4
I0223 05:09:32.408612 25649 api_server.go:130] duration metric: took 6.777726ms to wait for apiserver health ...
I0223 05:09:32.408621 25649 system_pods.go:43] waiting for kube-system pods to appear ...
I0223 05:09:32.591210 25649 system_pods.go:59] 7 kube-system pods found
I0223 05:09:32.591235 25649 system_pods.go:61] "coredns-6d4b75cb6d-mmpvt" [3928e1dc-58bd-434f-bc29-8c20afb5e112] Running
I0223 05:09:32.591240 25649 system_pods.go:61] "etcd-test-preload-113143" [65f0e6f1-4ff2-49bd-9f2f-58967808df14] Running
I0223 05:09:32.591251 25649 system_pods.go:61] "kube-apiserver-test-preload-113143" [e28969a2-5979-483e-bd07-658187cffae5] Running
I0223 05:09:32.591255 25649 system_pods.go:61] "kube-controller-manager-test-preload-113143" [055f8ab8-0181-4121-8993-88d236e645c4] Running
I0223 05:09:32.591259 25649 system_pods.go:61] "kube-proxy-bq8xz" [b957cd83-fc56-48cc-a924-775e7a3ad79f] Running
I0223 05:09:32.591263 25649 system_pods.go:61] "kube-scheduler-test-preload-113143" [901702d4-f84c-4418-a3df-ea323600a55d] Running
I0223 05:09:32.591269 25649 system_pods.go:61] "storage-provisioner" [a4976d12-2647-4fa6-8366-5d94a2155a2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0223 05:09:32.591274 25649 system_pods.go:74] duration metric: took 182.648658ms to wait for pod list to return data ...
I0223 05:09:32.591280 25649 default_sa.go:34] waiting for default service account to be created ...
I0223 05:09:32.787703 25649 default_sa.go:45] found service account: "default"
I0223 05:09:32.787725 25649 default_sa.go:55] duration metric: took 196.440351ms for default service account to be created ...
I0223 05:09:32.787732 25649 system_pods.go:116] waiting for k8s-apps to be running ...
I0223 05:09:32.990191 25649 system_pods.go:86] 7 kube-system pods found
I0223 05:09:32.990226 25649 system_pods.go:89] "coredns-6d4b75cb6d-mmpvt" [3928e1dc-58bd-434f-bc29-8c20afb5e112] Running
I0223 05:09:32.990234 25649 system_pods.go:89] "etcd-test-preload-113143" [65f0e6f1-4ff2-49bd-9f2f-58967808df14] Running
I0223 05:09:32.990240 25649 system_pods.go:89] "kube-apiserver-test-preload-113143" [e28969a2-5979-483e-bd07-658187cffae5] Running
I0223 05:09:32.990247 25649 system_pods.go:89] "kube-controller-manager-test-preload-113143" [055f8ab8-0181-4121-8993-88d236e645c4] Running
I0223 05:09:32.990253 25649 system_pods.go:89] "kube-proxy-bq8xz" [b957cd83-fc56-48cc-a924-775e7a3ad79f] Running
I0223 05:09:32.990259 25649 system_pods.go:89] "kube-scheduler-test-preload-113143" [901702d4-f84c-4418-a3df-ea323600a55d] Running
I0223 05:09:32.990277 25649 system_pods.go:89] "storage-provisioner" [a4976d12-2647-4fa6-8366-5d94a2155a2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0223 05:09:32.990285 25649 system_pods.go:126] duration metric: took 202.549103ms to wait for k8s-apps to be running ...
I0223 05:09:32.990294 25649 system_svc.go:44] waiting for kubelet service to be running ....
I0223 05:09:32.990341 25649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 05:09:33.004902 25649 system_svc.go:56] duration metric: took 14.585277ms WaitForService to wait for kubelet.
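
The WaitForService step is just the `systemctl is-active --quiet service kubelet` command shown above, run over SSH and judged purely by exit status. Run locally, the equivalent check looks like the sketch below (plain exec.Command rather than minikube's ssh_runner; the argument list is copied from the log line).

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output, so only the exit status matters (0 == active).
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```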
I0223 05:09:33.004933 25649 kubeadm.go:578] duration metric: took 20.264701087s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0223 05:09:33.004986 25649 node_conditions.go:102] verifying NodePressure condition ...
I0223 05:09:33.187466 25649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 05:09:33.187493 25649 node_conditions.go:123] node cpu capacity is 2
I0223 05:09:33.187503 25649 node_conditions.go:105] duration metric: took 182.51139ms to run NodePressure ...
I0223 05:09:33.187514 25649 start.go:228] waiting for startup goroutines ...
I0223 05:09:33.187520 25649 start.go:233] waiting for cluster config update ...
I0223 05:09:33.187529 25649 start.go:242] writing updated cluster config ...
I0223 05:09:33.187784 25649 ssh_runner.go:195] Run: rm -f paused
I0223 05:09:33.236840 25649 start.go:555] kubectl: 1.26.1, cluster: 1.24.4 (minor skew: 2)
I0223 05:09:33.239413 25649 out.go:177]
W0223 05:09:33.241191 25649 out.go:239] ! /usr/local/bin/kubectl is version 1.26.1, which may have incompatibilities with Kubernetes 1.24.4.
I0223 05:09:33.242920 25649 out.go:177] - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
I0223 05:09:33.244772 25649 out.go:177] * Done! kubectl is now configured to use "test-preload-113143" cluster and "default" namespace by default
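
The closing warning comes from a simple minor-version comparison: kubectl 1.26.1 against cluster 1.24.4 is a skew of two minor versions, outside the one-minor window kubectl supports. A hedged sketch of that comparison (hand-rolled parsing for illustration, not minikube's version-skew code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.26.1", "1.24.4" // versions taken from the log
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew)
	if skew > 1 {
		fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s\n", kubectl, cluster)
	}
}
```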
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
e9086c130faaa a4ca41631cc7a 3 seconds ago Running coredns 1 272567bfc1eee
c07d53a7b0d73 7a53d1e08ef58 9 seconds ago Running kube-proxy 1 287a18f017a43
2863db25cf066 1f99cb6da9a82 20 seconds ago Running kube-controller-manager 2 627b3ddfcac38
4be127efeda99 6cab9d1bed1be 28 seconds ago Running kube-apiserver 2 2f2af25be8b93
e67313b9c90e5 1f99cb6da9a82 42 seconds ago Exited kube-controller-manager 1 627b3ddfcac38
82f70c263d12e aebe758cef4cd 52 seconds ago Running etcd 1 b6220acccd7ca
263c6e12a3a71 03fa22539fc1c 53 seconds ago Running kube-scheduler 1 73c4a3a4e580f
26d6d8b7f66e2 6cab9d1bed1be About a minute ago Exited kube-apiserver 1 2f2af25be8b93
*
* ==> containerd <==
* -- Journal begins at Thu 2023-02-23 05:07:52 UTC, ends at Thu 2023-02-23 05:09:34 UTC. --
Feb 23 05:09:23 test-preload-113143 containerd[628]: time="2023-02-23T05:09:23.755078933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:09:23 test-preload-113143 containerd[628]: time="2023-02-23T05:09:23.755255754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:09:23 test-preload-113143 containerd[628]: time="2023-02-23T05:09:23.755267326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:09:23 test-preload-113143 containerd[628]: time="2023-02-23T05:09:23.755558286Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e pid=1441 runtime=io.containerd.runc.v2
Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.091404305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:a4976d12-2647-4fa6-8366-5d94a2155a2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e\""
Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.100851276Z" level=info msg="CreateContainer within sandbox \"76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.112050594Z" level=error msg="CreateContainer within sandbox \"76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1485309488 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists"
Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.723310673Z" level=info msg="CreateContainer within sandbox \"287a18f017a433c1fe40b39903b974b13f44b42c45101dc30e45325666af8e0b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.757095296Z" level=info msg="CreateContainer within sandbox \"287a18f017a433c1fe40b39903b974b13f44b42c45101dc30e45325666af8e0b\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"c07d53a7b0d7358ada22b20b8e047addfdd2a5d8ca0fe53c9350ce007c00fb6b\""
Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.761461531Z" level=info msg="StartContainer for \"c07d53a7b0d7358ada22b20b8e047addfdd2a5d8ca0fe53c9350ce007c00fb6b\""
Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.854428630Z" level=info msg="StartContainer for \"c07d53a7b0d7358ada22b20b8e047addfdd2a5d8ca0fe53c9350ce007c00fb6b\" returns successfully"
Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.911519099Z" level=info msg="CreateContainer within sandbox \"76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.938680061Z" level=error msg="CreateContainer within sandbox \"76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3011273267 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists"
Feb 23 05:09:29 test-preload-113143 containerd[628]: time="2023-02-23T05:09:29.723579450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-mmpvt,Uid:3928e1dc-58bd-434f-bc29-8c20afb5e112,Namespace:kube-system,Attempt:0,}"
Feb 23 05:09:29 test-preload-113143 containerd[628]: time="2023-02-23T05:09:29.826844335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:09:29 test-preload-113143 containerd[628]: time="2023-02-23T05:09:29.826899452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:09:29 test-preload-113143 containerd[628]: time="2023-02-23T05:09:29.826908802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:09:29 test-preload-113143 containerd[628]: time="2023-02-23T05:09:29.827317117Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63 pid=1647 runtime=io.containerd.runc.v2
Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.160067894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-mmpvt,Uid:3928e1dc-58bd-434f-bc29-8c20afb5e112,Namespace:kube-system,Attempt:0,} returns sandbox id \"272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63\""
Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.167193734Z" level=info msg="CreateContainer within sandbox \"272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.198776943Z" level=error msg="CreateContainer within sandbox \"272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63\" for &ContainerMetadata{Name:coredns,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1946538601 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists"
Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.923195445Z" level=info msg="CreateContainer within sandbox \"272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.970962111Z" level=info msg="CreateContainer within sandbox \"272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e9086c130faaa5794e1ea4eb2ac50d3af5376fea2c565b0e99be7b6e81bb608a\""
Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.972234600Z" level=info msg="StartContainer for \"e9086c130faaa5794e1ea4eb2ac50d3af5376fea2c565b0e99be7b6e81bb608a\""
Feb 23 05:09:31 test-preload-113143 containerd[628]: time="2023-02-23T05:09:31.061541291Z" level=info msg="StartContainer for \"e9086c130faaa5794e1ea4eb2ac50d3af5376fea2c565b0e99be7b6e81bb608a\" returns successfully"
*
* ==> coredns [e9086c130faaa5794e1ea4eb2ac50d3af5376fea2c565b0e99be7b6e81bb608a] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] 127.0.0.1:36477 - 25196 "HINFO IN 1800509997243044346.4330803314511940720. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006843638s
*
* ==> describe nodes <==
* Name: test-preload-113143
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=test-preload-113143
kubernetes.io/os=linux
minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321
minikube.k8s.io/name=test-preload-113143
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_02_23T05_04_47_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 23 Feb 2023 05:04:44 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: test-preload-113143
AcquireTime: <unset>
RenewTime: Thu, 23 Feb 2023 05:09:32 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 23 Feb 2023 05:09:21 +0000 Thu, 23 Feb 2023 05:04:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 23 Feb 2023 05:09:21 +0000 Thu, 23 Feb 2023 05:04:41 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 23 Feb 2023 05:09:21 +0000 Thu, 23 Feb 2023 05:04:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 23 Feb 2023 05:09:21 +0000 Thu, 23 Feb 2023 05:09:21 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.53
Hostname: test-preload-113143
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 5394e64b7b5e4495aba69ae1cd40df43
System UUID: 5394e64b-7b5e-4495-aba6-9ae1cd40df43
Boot ID: 84658728-e2dc-4b20-b2b5-55b270763021
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.15
Kubelet Version: v1.24.4
Kube-Proxy Version: v1.24.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-mmpvt 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 4m34s
kube-system etcd-test-preload-113143 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 4m49s
kube-system kube-apiserver-test-preload-113143 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m46s
kube-system kube-controller-manager-test-preload-113143 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m47s
kube-system kube-proxy-bq8xz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m34s
kube-system kube-scheduler-test-preload-113143 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m46s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m32s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 9s kube-proxy
Normal Starting 4m31s kube-proxy
Normal NodeHasSufficientMemory 4m56s (x5 over 4m56s) kubelet Node test-preload-113143 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m56s (x5 over 4m56s) kubelet Node test-preload-113143 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m56s (x4 over 4m56s) kubelet Node test-preload-113143 status is now: NodeHasSufficientPID
Normal Starting 4m47s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m47s kubelet Node test-preload-113143 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m47s kubelet Node test-preload-113143 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m47s kubelet Node test-preload-113143 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m46s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 4m36s kubelet Node test-preload-113143 status is now: NodeReady
Normal RegisteredNode 4m35s node-controller Node test-preload-113143 event: Registered Node test-preload-113143 in Controller
Normal Starting 69s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 69s (x8 over 69s) kubelet Node test-preload-113143 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 69s (x8 over 69s) kubelet Node test-preload-113143 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 69s (x7 over 69s) kubelet Node test-preload-113143 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 69s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 9s node-controller Node test-preload-113143 event: Registered Node test-preload-113143 in Controller
*
* ==> dmesg <==
* [Feb23 05:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.072142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.962902] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.184322] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.150664] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.469081] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[Feb23 05:08] systemd-fstab-generator[527]: Ignoring "noauto" for root device
[ +2.811503] systemd-fstab-generator[556]: Ignoring "noauto" for root device
[ +0.102720] systemd-fstab-generator[567]: Ignoring "noauto" for root device
[ +0.129947] systemd-fstab-generator[580]: Ignoring "noauto" for root device
[ +0.095532] systemd-fstab-generator[591]: Ignoring "noauto" for root device
[ +0.232107] systemd-fstab-generator[618]: Ignoring "noauto" for root device
[ +13.433338] systemd-fstab-generator[814]: Ignoring "noauto" for root device
[Feb23 05:09] kauditd_printk_skb: 7 callbacks suppressed
[ +5.997888] kauditd_printk_skb: 15 callbacks suppressed
*
* ==> etcd [82f70c263d12e8535727b6f70dc9c136781cb5d47e181a272699b4d324d859b8] <==
* {"level":"info","ts":"2023-02-23T05:08:42.289Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8389b8f6c4f004d4","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-02-23T05:08:42.289Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-02-23T05:08:42.291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 switched to configuration voters=(9478310260783449300)"}
{"level":"info","ts":"2023-02-23T05:08:42.291Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1138cde6dcc1ce27","local-member-id":"8389b8f6c4f004d4","added-peer-id":"8389b8f6c4f004d4","added-peer-peer-urls":["https://192.168.39.53:2380"]}
{"level":"info","ts":"2023-02-23T05:08:42.291Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1138cde6dcc1ce27","local-member-id":"8389b8f6c4f004d4","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T05:08:42.291Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T05:08:42.293Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-02-23T05:08:42.294Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8389b8f6c4f004d4","initial-advertise-peer-urls":["https://192.168.39.53:2380"],"listen-peer-urls":["https://192.168.39.53:2380"],"advertise-client-urls":["https://192.168.39.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-02-23T05:08:42.294Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-02-23T05:08:42.295Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.53:2380"}
{"level":"info","ts":"2023-02-23T05:08:42.295Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.53:2380"}
{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 is starting a new election at term 2"}
{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 became pre-candidate at term 2"}
{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 received MsgPreVoteResp from 8389b8f6c4f004d4 at term 2"}
{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 became candidate at term 3"}
{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 received MsgVoteResp from 8389b8f6c4f004d4 at term 3"}
{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 became leader at term 3"}
{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8389b8f6c4f004d4 elected leader 8389b8f6c4f004d4 at term 3"}
{"level":"info","ts":"2023-02-23T05:08:43.270Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8389b8f6c4f004d4","local-member-attributes":"{Name:test-preload-113143 ClientURLs:[https://192.168.39.53:2379]}","request-path":"/0/members/8389b8f6c4f004d4/attributes","cluster-id":"1138cde6dcc1ce27","publish-timeout":"7s"}
{"level":"info","ts":"2023-02-23T05:08:43.270Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T05:08:43.271Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T05:08:43.273Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-02-23T05:08:43.273Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.53:2379"}
{"level":"info","ts":"2023-02-23T05:08:43.273Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-02-23T05:08:43.273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 05:09:34 up 1 min, 0 users, load average: 0.84, 0.25, 0.09
Linux test-preload-113143 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [26d6d8b7f66e2c3a37e7b3201a453b0ba3e5427b0490eae75d7645d3d5c0173a] <==
* I0223 05:08:27.016247 1 server.go:558] external host was not specified, using 192.168.39.53
I0223 05:08:27.017132 1 server.go:158] Version: v1.24.4
I0223 05:08:27.017181 1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 05:08:27.235513 1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
I0223 05:08:27.236422 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0223 05:08:27.236435 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0223 05:08:27.237415 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0223 05:08:27.237426 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0223 05:08:27.240466 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0223 05:08:28.235811 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0223 05:08:28.241663 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0223 05:08:29.236633 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0223 05:08:29.601245 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0223 05:08:30.673856 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0223 05:08:31.737307 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0223 05:08:32.945880 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0223 05:08:36.146408 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0223 05:08:37.760454 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0223 05:08:42.229113 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
E0223 05:08:47.240471 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-apiserver [4be127efeda9951812d50daf33349c99ea494ca4adc427fd492a0bca8b26b5c2] <==
* I0223 05:09:09.021822 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0223 05:09:09.021841 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0223 05:09:09.021853 1 crd_finalizer.go:266] Starting CRDFinalizer
I0223 05:09:09.021872 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0223 05:09:09.021875 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0223 05:09:09.022388 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0223 05:09:09.023024 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0223 05:09:09.114707 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0223 05:09:09.117626 1 cache.go:39] Caches are synced for autoregister controller
I0223 05:09:09.117670 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0223 05:09:09.117687 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0223 05:09:09.118172 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0223 05:09:09.123397 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0223 05:09:09.145333 1 shared_informer.go:262] Caches are synced for node_authorizer
I0223 05:09:09.717045 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0223 05:09:10.025826 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0223 05:09:10.844552 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0223 05:09:10.861472 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0223 05:09:10.906497 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0223 05:09:10.925379 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0223 05:09:10.935430 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0223 05:09:11.862953 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0223 05:09:25.072880 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0223 05:09:25.584900 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0223 05:09:25.864663 1 controller.go:611] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [2863db25cf0664dcd0d086d52797107b2f30c8801252b161e47992f395dd65b7] <==
* W0223 05:09:25.625582 1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-113143. Assuming now as a timestamp.
I0223 05:09:25.625617 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0223 05:09:25.626099 1 event.go:294] "Event occurred" object="test-preload-113143" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-113143 event: Registered Node test-preload-113143 in Controller"
I0223 05:09:25.628887 1 shared_informer.go:262] Caches are synced for GC
I0223 05:09:25.631089 1 shared_informer.go:262] Caches are synced for stateful set
I0223 05:09:25.637084 1 shared_informer.go:262] Caches are synced for expand
I0223 05:09:25.639520 1 shared_informer.go:262] Caches are synced for crt configmap
I0223 05:09:25.644092 1 shared_informer.go:262] Caches are synced for TTL
I0223 05:09:25.650235 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0223 05:09:25.652489 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0223 05:09:25.652670 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0223 05:09:25.653931 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0223 05:09:25.683957 1 shared_informer.go:262] Caches are synced for cronjob
I0223 05:09:25.703441 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0223 05:09:25.721506 1 shared_informer.go:262] Caches are synced for disruption
I0223 05:09:25.721520 1 disruption.go:371] Sending events to api server.
I0223 05:09:25.727281 1 shared_informer.go:262] Caches are synced for deployment
I0223 05:09:25.835255 1 shared_informer.go:262] Caches are synced for attach detach
I0223 05:09:25.835724 1 shared_informer.go:262] Caches are synced for resource quota
I0223 05:09:25.846506 1 shared_informer.go:262] Caches are synced for endpoint
I0223 05:09:25.849829 1 shared_informer.go:262] Caches are synced for resource quota
I0223 05:09:25.883403 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0223 05:09:26.263251 1 shared_informer.go:262] Caches are synced for garbage collector
I0223 05:09:26.263299 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0223 05:09:26.273864 1 shared_informer.go:262] Caches are synced for garbage collector
*
* ==> kube-controller-manager [e67313b9c90e5d35bc1c2a085135b0289a2017c7223f431be4468d304173ee69] <==
* vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x2f6
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run.func1()
vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:165 +0x3c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x3931a60?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x4d010e0, 0xc000e4d530}, 0x1, 0xc000446900)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0xdf8475800, 0x0, 0xa0?, 0xc00006efd0?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x4d2abb0?, 0xc0005a0a40?, 0xc0007ebda0?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:164 +0x372
goroutine 141 [syscall]:
syscall.Syscall6(0xe8, 0xd, 0xc000f2fc14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
/usr/local/go/src/syscall/asm_linux_amd64.s:43 +0x5
k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0x0?, {0xc000f2fc14?, 0x0?, 0x0?}, 0x0?)
vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:56 +0x58
k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000dbb420)
vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x7d
k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0005b8a00)
vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x26e
created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1c5
*
* ==> kube-proxy [c07d53a7b0d7358ada22b20b8e047addfdd2a5d8ca0fe53c9350ce007c00fb6b] <==
* I0223 05:09:25.012755 1 node.go:163] Successfully retrieved node IP: 192.168.39.53
I0223 05:09:25.012832 1 server_others.go:138] "Detected node IP" address="192.168.39.53"
I0223 05:09:25.012918 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0223 05:09:25.060338 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0223 05:09:25.060380 1 server_others.go:206] "Using iptables Proxier"
I0223 05:09:25.061135 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0223 05:09:25.063287 1 server.go:661] "Version info" version="v1.24.4"
I0223 05:09:25.063325 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 05:09:25.064651 1 config.go:317] "Starting service config controller"
I0223 05:09:25.065679 1 shared_informer.go:255] Waiting for caches to sync for service config
I0223 05:09:25.065732 1 config.go:226] "Starting endpoint slice config controller"
I0223 05:09:25.065738 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0223 05:09:25.069621 1 config.go:444] "Starting node config controller"
I0223 05:09:25.069721 1 shared_informer.go:255] Waiting for caches to sync for node config
I0223 05:09:25.166063 1 shared_informer.go:262] Caches are synced for service config
I0223 05:09:25.166081 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0223 05:09:25.170668 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [263c6e12a3a71fb6508ca375613a7bcb8e65a1da59d0a00449930d6b59deab8d] <==
* W0223 05:09:04.407455 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.53:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
E0223 05:09:04.407500 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.53:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
W0223 05:09:04.541907 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.39.53:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
E0223 05:09:04.541945 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.53:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
W0223 05:09:04.554894 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.53:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
E0223 05:09:04.554933 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.53:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
W0223 05:09:05.650504 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.39.53:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
E0223 05:09:05.650536 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.53:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
W0223 05:09:05.872695 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.39.53:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
E0223 05:09:05.872922 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.53:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
W0223 05:09:05.916585 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.39.53:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
E0223 05:09:05.916862 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.53:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
W0223 05:09:09.050756 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0223 05:09:09.050813 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0223 05:09:09.051222 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0223 05:09:09.051260 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0223 05:09:09.052159 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0223 05:09:09.052203 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0223 05:09:09.052465 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0223 05:09:09.052502 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0223 05:09:09.052720 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0223 05:09:09.052756 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0223 05:09:09.055114 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0223 05:09:09.055155 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0223 05:09:28.061866 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Thu 2023-02-23 05:07:52 UTC, ends at Thu 2023-02-23 05:09:34 UTC. --
Feb 23 05:09:10 test-preload-113143 kubelet[820]: E0223 05:09:10.850838 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-bq8xz" podUID=b957cd83-fc56-48cc-a924-775e7a3ad79f
Feb 23 05:09:10 test-preload-113143 kubelet[820]: I0223 05:09:10.856159 820 kubelet_node_status.go:70] "Attempting to register node" node="test-preload-113143"
Feb 23 05:09:11 test-preload-113143 kubelet[820]: E0223 05:09:11.342591 820 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Feb 23 05:09:11 test-preload-113143 kubelet[820]: E0223 05:09:11.343233 820 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3928e1dc-58bd-434f-bc29-8c20afb5e112-config-volume podName:3928e1dc-58bd-434f-bc29-8c20afb5e112 nodeName:}" failed. No retries permitted until 2023-02-23 05:09:13.343152384 +0000 UTC m=+47.874964728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3928e1dc-58bd-434f-bc29-8c20afb5e112-config-volume") pod "coredns-6d4b75cb6d-mmpvt" (UID: "3928e1dc-58bd-434f-bc29-8c20afb5e112") : object "kube-system"/"coredns" not registered
Feb 23 05:09:11 test-preload-113143 kubelet[820]: I0223 05:09:11.627920 820 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-113143"
Feb 23 05:09:11 test-preload-113143 kubelet[820]: I0223 05:09:11.628110 820 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-113143"
Feb 23 05:09:11 test-preload-113143 kubelet[820]: I0223 05:09:11.630387 820 setters.go:532] "Node became not ready" node="test-preload-113143" condition={Type:Ready Status:False LastHeartbeatTime:2023-02-23 05:09:11.630324888 +0000 UTC m=+46.162137226 LastTransitionTime:2023-02-23 05:09:11.630324888 +0000 UTC m=+46.162137226 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 23 05:09:11 test-preload-113143 kubelet[820]: E0223 05:09:11.722375 820 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6d4b75cb6d-mmpvt" podUID=3928e1dc-58bd-434f-bc29-8c20afb5e112
Feb 23 05:09:13 test-preload-113143 kubelet[820]: E0223 05:09:13.358379 820 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Feb 23 05:09:13 test-preload-113143 kubelet[820]: E0223 05:09:13.358582 820 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3928e1dc-58bd-434f-bc29-8c20afb5e112-config-volume podName:3928e1dc-58bd-434f-bc29-8c20afb5e112 nodeName:}" failed. No retries permitted until 2023-02-23 05:09:17.358503581 +0000 UTC m=+51.890315907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3928e1dc-58bd-434f-bc29-8c20afb5e112-config-volume") pod "coredns-6d4b75cb6d-mmpvt" (UID: "3928e1dc-58bd-434f-bc29-8c20afb5e112") : object "kube-system"/"coredns" not registered
Feb 23 05:09:13 test-preload-113143 kubelet[820]: E0223 05:09:13.724198 820 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6d4b75cb6d-mmpvt" podUID=3928e1dc-58bd-434f-bc29-8c20afb5e112
Feb 23 05:09:13 test-preload-113143 kubelet[820]: I0223 05:09:13.799966 820 scope.go:110] "RemoveContainer" containerID="e67313b9c90e5d35bc1c2a085135b0289a2017c7223f431be4468d304173ee69"
Feb 23 05:09:17 test-preload-113143 kubelet[820]: E0223 05:09:17.549730 820 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1713846745 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists"
Feb 23 05:09:17 test-preload-113143 kubelet[820]: E0223 05:09:17.550179 820 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1713846745 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists" pod="kube-system/coredns-6d4b75cb6d-mmpvt"
Feb 23 05:09:17 test-preload-113143 kubelet[820]: E0223 05:09:17.550242 820 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1713846745 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists" pod="kube-system/coredns-6d4b75cb6d-mmpvt"
Feb 23 05:09:17 test-preload-113143 kubelet[820]: E0223 05:09:17.550459 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-mmpvt_kube-system(3928e1dc-58bd-434f-bc29-8c20afb5e112)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-mmpvt_kube-system(3928e1dc-58bd-434f-bc29-8c20afb5e112)\\\": rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1713846745 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists\"" pod="kube-system/coredns-6d4b75cb6d-mmpvt" podUID=3928e1dc-58bd-434f-bc29-8c20afb5e112
Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.112458 820 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1485309488 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists" podSandboxID="76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e"
Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.112559 820 kuberuntime_manager.go:905] container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qscbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod storage-provisioner_kube-system(a4976d12-2647-4fa6-8366-5d94a2155a2f): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1485309488 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists
Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.112591 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1485309488 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists\"" pod="kube-system/storage-provisioner" podUID=a4976d12-2647-4fa6-8366-5d94a2155a2f
Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.939311 820 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3011273267 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists" podSandboxID="76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e"
Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.939481 820 kuberuntime_manager.go:905] container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qscbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod storage-provisioner_kube-system(a4976d12-2647-4fa6-8366-5d94a2155a2f): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3011273267 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists
Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.939813 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3011273267 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists\"" pod="kube-system/storage-provisioner" podUID=a4976d12-2647-4fa6-8366-5d94a2155a2f
Feb 23 05:09:30 test-preload-113143 kubelet[820]: E0223 05:09:30.199204 820 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1946538601 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists" podSandboxID="272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63"
Feb 23 05:09:30 test-preload-113143 kubelet[820]: E0223 05:09:30.199381 820 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d8mjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-mmpvt_kube-system(3928e1dc-58bd-434f-bc29-8c20afb5e112): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1946538601 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists
Feb 23 05:09:30 test-preload-113143 kubelet[820]: E0223 05:09:30.199421 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1946538601 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists\"" pod="kube-system/coredns-6d4b75cb6d-mmpvt" podUID=3928e1dc-58bd-434f-bc29-8c20afb5e112
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-113143 -n test-preload-113143
helpers_test.go:261: (dbg) Run: kubectl --context test-preload-113143 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-113143" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-113143
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-113143: (1.183086398s)
--- FAIL: TestPreload (357.55s)