=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-034636 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E0224 23:00:26.382048 639735 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/functional-648465/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-034636 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (2m3.82187345s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-034636 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-034636 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.814699797s)
preload_test.go:63: (dbg) Run: out/minikube-linux-amd64 stop -p test-preload-034636
E0224 23:02:35.566956 639735 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/addons-674897/client.crt: no such file or directory
E0224 23:03:07.933782 639735 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/ingress-addon-legacy-185956/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-034636: (1m31.6535766s)
preload_test.go:71: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-034636 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd
E0224 23:04:32.516711 639735 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/addons-674897/client.crt: no such file or directory
E0224 23:05:26.381074 639735 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/functional-648465/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-034636 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd: (2m46.552906737s)
preload_test.go:80: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-034636 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got
-- stdout --
IMAGE                                     TAG                  IMAGE ID        SIZE
docker.io/kindest/kindnetd                v20220726-ed811e41   d921cee849482   25.8MB
gcr.io/k8s-minikube/storage-provisioner   v5                   6e38f40d628db   9.06MB
k8s.gcr.io/coredns/coredns                v1.8.6               a4ca41631cc7a   13.6MB
k8s.gcr.io/etcd                           3.5.3-0              aebe758cef4cd   102MB
k8s.gcr.io/kube-apiserver                 v1.24.4              6cab9d1bed1be   33.8MB
k8s.gcr.io/kube-controller-manager        v1.24.4              1f99cb6da9a82   31MB
k8s.gcr.io/kube-proxy                     v1.24.4              7a53d1e08ef58   39.5MB
k8s.gcr.io/kube-scheduler                 v1.24.4              03fa22539fc1c   15.5MB
k8s.gcr.io/pause                          3.7                  221177c6082a8   311kB
-- /stdout --
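
The check that fails here is the point of TestPreload: gcr.io/k8s-minikube/busybox was pulled by hand at 23:01, the VM was stopped and restarted, and the test expects that image to survive the restart; the list above contains only the preloaded Kubernetes images. A minimal Go sketch of the shape of that assertion (assumption: Run, Target and RunResult.Output approximate minikube's integration-test helpers; the snippet sits inside the test body and needs the os/exec and strings imports):

// Sketch of the assertion at preload_test.go:85 (helper names assumed).
rr, err := Run(t, exec.CommandContext(ctx, Target(), "ssh", "-p", profile, "--", "sudo", "crictl", "image", "ls"))
if err != nil {
	t.Fatalf("listing images: %v", err)
}
if !strings.Contains(rr.Output(), "gcr.io/k8s-minikube/busybox") {
	t.Fatalf("Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got\n%s", rr.Output())
}
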
panic.go:522: *** TestPreload FAILED at 2023-02-24 23:06:10.613522868 +0000 UTC m=+2692.984297923
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-034636 -n test-preload-034636
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-034636 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-034636 logs -n 25: (1.198273439s)
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-360439 ssh -n | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:42 UTC | 24 Feb 23 22:42 UTC |
| | multinode-360439-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-360439 ssh -n multinode-360439 sudo cat | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:42 UTC | 24 Feb 23 22:42 UTC |
| | /home/docker/cp-test_multinode-360439-m03_multinode-360439.txt | | | | | |
| cp | multinode-360439 cp multinode-360439-m03:/home/docker/cp-test.txt | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:42 UTC | 24 Feb 23 22:42 UTC |
| | multinode-360439-m02:/home/docker/cp-test_multinode-360439-m03_multinode-360439-m02.txt | | | | | |
| ssh | multinode-360439 ssh -n | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:42 UTC | 24 Feb 23 22:42 UTC |
| | multinode-360439-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-360439 ssh -n multinode-360439-m02 sudo cat | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:42 UTC | 24 Feb 23 22:42 UTC |
| | /home/docker/cp-test_multinode-360439-m03_multinode-360439-m02.txt | | | | | |
| node | multinode-360439 node stop m03 | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:42 UTC | 24 Feb 23 22:42 UTC |
| node | multinode-360439 node start | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:42 UTC | 24 Feb 23 22:43 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-360439 | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:43 UTC | |
| stop | -p multinode-360439 | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:43 UTC | 24 Feb 23 22:46 UTC |
| start | -p multinode-360439 | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:46 UTC | 24 Feb 23 22:51 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-360439 | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:51 UTC | |
| node | multinode-360439 node delete | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:51 UTC | 24 Feb 23 22:51 UTC |
| | m03 | | | | | |
| stop | multinode-360439 stop | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:51 UTC | 24 Feb 23 22:55 UTC |
| start | -p multinode-360439 | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:55 UTC | 24 Feb 23 22:58 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-360439 | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:58 UTC | |
| start | -p multinode-360439-m02 | multinode-360439-m02 | jenkins | v1.29.0 | 24 Feb 23 22:58 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-360439-m03 | multinode-360439-m03 | jenkins | v1.29.0 | 24 Feb 23 22:58 UTC | 24 Feb 23 22:59 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-360439 | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:59 UTC | |
| delete | -p multinode-360439-m03 | multinode-360439-m03 | jenkins | v1.29.0 | 24 Feb 23 22:59 UTC | 24 Feb 23 22:59 UTC |
| delete | -p multinode-360439 | multinode-360439 | jenkins | v1.29.0 | 24 Feb 23 22:59 UTC | 24 Feb 23 22:59 UTC |
| start | -p test-preload-034636 | test-preload-034636 | jenkins | v1.29.0 | 24 Feb 23 22:59 UTC | 24 Feb 23 23:01 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-034636 | test-preload-034636 | jenkins | v1.29.0 | 24 Feb 23 23:01 UTC | 24 Feb 23 23:01 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| stop | -p test-preload-034636 | test-preload-034636 | jenkins | v1.29.0 | 24 Feb 23 23:01 UTC | 24 Feb 23 23:03 UTC |
| start | -p test-preload-034636 | test-preload-034636 | jenkins | v1.29.0 | 24 Feb 23 23:03 UTC | 24 Feb 23 23:06 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p test-preload-034636 -- sudo | test-preload-034636 | jenkins | v1.29.0 | 24 Feb 23 23:06 UTC | 24 Feb 23 23:06 UTC |
| | crictl image ls | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/02/24 23:03:23
Running on machine: ubuntu-20-agent
Binary: Built with gc go1.20.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0224 23:03:23.862148 654330 out.go:296] Setting OutFile to fd 1 ...
I0224 23:03:23.862860 654330 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0224 23:03:23.862878 654330 out.go:309] Setting ErrFile to fd 2...
I0224 23:03:23.862887 654330 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0224 23:03:23.863125 654330 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-632653/.minikube/bin
I0224 23:03:23.864027 654330 out.go:303] Setting JSON to false
I0224 23:03:23.865034 654330 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6350,"bootTime":1677273454,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0224 23:03:23.865117 654330 start.go:135] virtualization: kvm guest
I0224 23:03:23.867591 654330 out.go:177] * [test-preload-034636] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0224 23:03:23.869833 654330 out.go:177] - MINIKUBE_LOCATION=15909
I0224 23:03:23.869798 654330 notify.go:220] Checking for updates...
I0224 23:03:23.871663 654330 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0224 23:03:23.873454 654330 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15909-632653/kubeconfig
I0224 23:03:23.875058 654330 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-632653/.minikube
I0224 23:03:23.876691 654330 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0224 23:03:23.878594 654330 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0224 23:03:23.880788 654330 config.go:182] Loaded profile config "test-preload-034636": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0224 23:03:23.881204 654330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0224 23:03:23.881284 654330 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 23:03:23.896313 654330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
I0224 23:03:23.896730 654330 main.go:141] libmachine: () Calling .GetVersion
I0224 23:03:23.897363 654330 main.go:141] libmachine: Using API Version 1
I0224 23:03:23.897387 654330 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 23:03:23.897769 654330 main.go:141] libmachine: () Calling .GetMachineName
I0224 23:03:23.898022 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:03:23.900299 654330 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
I0224 23:03:23.902086 654330 driver.go:365] Setting default libvirt URI to qemu:///system
I0224 23:03:23.902444 654330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0224 23:03:23.902525 654330 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 23:03:23.917229 654330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35899
I0224 23:03:23.917724 654330 main.go:141] libmachine: () Calling .GetVersion
I0224 23:03:23.918348 654330 main.go:141] libmachine: Using API Version 1
I0224 23:03:23.918383 654330 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 23:03:23.918813 654330 main.go:141] libmachine: () Calling .GetMachineName
I0224 23:03:23.919036 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:03:23.956393 654330 out.go:177] * Using the kvm2 driver based on existing profile
I0224 23:03:23.958091 654330 start.go:296] selected driver: kvm2
I0224 23:03:23.958116 654330 start.go:857] validating driver "kvm2" against &{Name:test-preload-034636 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-034636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0224 23:03:23.958280 654330 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0224 23:03:23.958933 654330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 23:03:23.959017 654330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-632653/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0224 23:03:23.975126 654330 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0224 23:03:23.975467 654330 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0224 23:03:23.975503 654330 cni.go:84] Creating CNI manager for ""
I0224 23:03:23.975517 654330 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0224 23:03:23.975527 654330 start_flags.go:319] config:
{Name:test-preload-034636 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-034636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0224 23:03:23.975639 654330 iso.go:125] acquiring lock: {Name:mk668538b8e60d9387046d0eb0549045f5720f74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 23:03:23.978044 654330 out.go:177] * Starting control plane node test-preload-034636 in cluster test-preload-034636
I0224 23:03:23.979771 654330 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0224 23:03:24.399312 654330 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0224 23:03:24.399374 654330 cache.go:57] Caching tarball of preloaded images
I0224 23:03:24.399599 654330 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0224 23:03:24.402155 654330 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
I0224 23:03:24.404029 654330 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0224 23:03:24.512278 654330 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/15909-632653/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0224 23:03:36.409787 654330 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0224 23:03:36.409892 654330 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15909-632653/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0224 23:03:37.286951 654330 cache.go:60] Finished verifying existence of preloaded tar for v1.24.4 on containerd
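
The download above appends an md5 checksum to the preload URL, and preload.go then saves and verifies that checksum before trusting the cached tarball. A self-contained sketch of the verification step (assumption: illustrative helper, not minikube's actual preload.go):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams a file through md5 and compares the hex digest with
// the expected value, e.g. 41d292e9d8b8bb8fdf3bc94dc3c43bf0 from the URL above.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, got, want)
	}
	return nil
}

func main() {
	if err := verifyMD5(os.Args[1], os.Args[2]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
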
I0224 23:03:37.287108 654330 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/config.json ...
I0224 23:03:37.287333 654330 cache.go:193] Successfully downloaded all kic artifacts
I0224 23:03:37.287366 654330 start.go:364] acquiring machines lock for test-preload-034636: {Name:mkdafbae58ff0ad20467701ce5fc311026c8d0ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0224 23:03:37.287436 654330 start.go:368] acquired machines lock for "test-preload-034636" in 49.964µs
I0224 23:03:37.287458 654330 start.go:96] Skipping create...Using existing machine configuration
I0224 23:03:37.287465 654330 fix.go:55] fixHost starting:
I0224 23:03:37.287792 654330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0224 23:03:37.287841 654330 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 23:03:37.302926 654330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
I0224 23:03:37.303343 654330 main.go:141] libmachine: () Calling .GetVersion
I0224 23:03:37.303809 654330 main.go:141] libmachine: Using API Version 1
I0224 23:03:37.303831 654330 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 23:03:37.304131 654330 main.go:141] libmachine: () Calling .GetMachineName
I0224 23:03:37.304296 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:03:37.304453 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetState
I0224 23:03:37.305925 654330 fix.go:103] recreateIfNeeded on test-preload-034636: state=Stopped err=<nil>
I0224 23:03:37.305954 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
W0224 23:03:37.306124 654330 fix.go:129] unexpected machine state, will restart: <nil>
I0224 23:03:37.308980 654330 out.go:177] * Restarting existing kvm2 VM for "test-preload-034636" ...
I0224 23:03:37.310739 654330 main.go:141] libmachine: (test-preload-034636) Calling .Start
I0224 23:03:37.310998 654330 main.go:141] libmachine: (test-preload-034636) Ensuring networks are active...
I0224 23:03:37.311965 654330 main.go:141] libmachine: (test-preload-034636) Ensuring network default is active
I0224 23:03:37.312316 654330 main.go:141] libmachine: (test-preload-034636) Ensuring network mk-test-preload-034636 is active
I0224 23:03:37.312618 654330 main.go:141] libmachine: (test-preload-034636) Getting domain xml...
I0224 23:03:37.313279 654330 main.go:141] libmachine: (test-preload-034636) Creating domain...
I0224 23:03:38.633815 654330 main.go:141] libmachine: (test-preload-034636) Waiting to get IP...
I0224 23:03:38.635086 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:38.635782 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:38.635889 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:38.635731 654365 retry.go:31] will retry after 217.779205ms: waiting for machine to come up
I0224 23:03:38.855608 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:38.856113 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:38.856133 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:38.856081 654365 retry.go:31] will retry after 296.07878ms: waiting for machine to come up
I0224 23:03:39.153658 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:39.154102 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:39.154132 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:39.154051 654365 retry.go:31] will retry after 416.466192ms: waiting for machine to come up
I0224 23:03:39.571685 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:39.572077 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:39.572119 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:39.572050 654365 retry.go:31] will retry after 589.284665ms: waiting for machine to come up
I0224 23:03:40.163048 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:40.163532 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:40.163566 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:40.163476 654365 retry.go:31] will retry after 488.977794ms: waiting for machine to come up
I0224 23:03:40.654147 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:40.654534 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:40.654564 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:40.654480 654365 retry.go:31] will retry after 857.053006ms: waiting for machine to come up
I0224 23:03:41.513627 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:41.514118 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:41.514143 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:41.514048 654365 retry.go:31] will retry after 815.971754ms: waiting for machine to come up
I0224 23:03:42.331858 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:42.332308 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:42.332334 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:42.332266 654365 retry.go:31] will retry after 1.305252126s: waiting for machine to come up
I0224 23:03:43.640064 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:43.640616 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:43.640644 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:43.640562 654365 retry.go:31] will retry after 1.469599105s: waiting for machine to come up
I0224 23:03:45.112208 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:45.112559 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:45.112585 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:45.112496 654365 retry.go:31] will retry after 1.597481701s: waiting for machine to come up
I0224 23:03:46.712119 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:46.712524 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:46.712557 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:46.712457 654365 retry.go:31] will retry after 2.62095565s: waiting for machine to come up
I0224 23:03:49.335827 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:49.336218 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:49.336249 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:49.336174 654365 retry.go:31] will retry after 2.864006293s: waiting for machine to come up
I0224 23:03:52.203605 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:52.204043 654330 main.go:141] libmachine: (test-preload-034636) DBG | unable to find current IP address of domain test-preload-034636 in network mk-test-preload-034636
I0224 23:03:52.204073 654330 main.go:141] libmachine: (test-preload-034636) DBG | I0224 23:03:52.203977 654365 retry.go:31] will retry after 3.204215674s: waiting for machine to come up
I0224 23:03:55.411960 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.412418 654330 main.go:141] libmachine: (test-preload-034636) Found IP for machine: 192.168.39.247
I0224 23:03:55.412451 654330 main.go:141] libmachine: (test-preload-034636) Reserving static IP address...
I0224 23:03:55.412466 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has current primary IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.412957 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "test-preload-034636", mac: "52:54:00:60:99:04", ip: "192.168.39.247"} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:55.412994 654330 main.go:141] libmachine: (test-preload-034636) DBG | skip adding static IP to network mk-test-preload-034636 - found existing host DHCP lease matching {name: "test-preload-034636", mac: "52:54:00:60:99:04", ip: "192.168.39.247"}
I0224 23:03:55.413006 654330 main.go:141] libmachine: (test-preload-034636) Reserved static IP address: 192.168.39.247
I0224 23:03:55.413021 654330 main.go:141] libmachine: (test-preload-034636) Waiting for SSH to be available...
I0224 23:03:55.413049 654330 main.go:141] libmachine: (test-preload-034636) DBG | Getting to WaitForSSH function...
I0224 23:03:55.415301 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.415599 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:55.415632 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.415825 654330 main.go:141] libmachine: (test-preload-034636) DBG | Using SSH client type: external
I0224 23:03:55.415871 654330 main.go:141] libmachine: (test-preload-034636) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-632653/.minikube/machines/test-preload-034636/id_rsa (-rw-------)
I0224 23:03:55.415907 654330 main.go:141] libmachine: (test-preload-034636) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-632653/.minikube/machines/test-preload-034636/id_rsa -p 22] /usr/bin/ssh <nil>}
I0224 23:03:55.415923 654330 main.go:141] libmachine: (test-preload-034636) DBG | About to run SSH command:
I0224 23:03:55.415938 654330 main.go:141] libmachine: (test-preload-034636) DBG | exit 0
I0224 23:03:55.511140 654330 main.go:141] libmachine: (test-preload-034636) DBG | SSH cmd err, output: <nil>:
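
The "will retry after ..." lines between 23:03:38 and 23:03:52 come from a bounded wait loop with growing, jittered delays, polling until the restarted VM has an IP and answers SSH. A generic sketch of that pattern (assumption: illustrative, not minikube's retry.go; needs the fmt, math/rand and time imports):

// waitFor polls check with growing, jittered delays until it succeeds or
// maxWait elapses -- the pattern behind the retry lines above.
func waitFor(check func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v: %w", maxWait, err)
		}
		// Jitter spreads out concurrent waiters; the base delay then grows.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}
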
I0224 23:03:55.511570 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetConfigRaw
I0224 23:03:55.512410 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetIP
I0224 23:03:55.515022 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.515424 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:55.515452 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.515686 654330 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/config.json ...
I0224 23:03:55.515917 654330 machine.go:88] provisioning docker machine ...
I0224 23:03:55.515939 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:03:55.516186 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetMachineName
I0224 23:03:55.516390 654330 buildroot.go:166] provisioning hostname "test-preload-034636"
I0224 23:03:55.516410 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetMachineName
I0224 23:03:55.516627 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHHostname
I0224 23:03:55.518851 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.519235 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:55.519257 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.519370 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHPort
I0224 23:03:55.519584 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:03:55.519728 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:03:55.519846 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHUsername
I0224 23:03:55.520059 654330 main.go:141] libmachine: Using SSH client type: native
I0224 23:03:55.520757 654330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.247 22 <nil> <nil>}
I0224 23:03:55.520780 654330 main.go:141] libmachine: About to run SSH command:
sudo hostname test-preload-034636 && echo "test-preload-034636" | sudo tee /etc/hostname
I0224 23:03:55.655321 654330 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-034636
I0224 23:03:55.655359 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHHostname
I0224 23:03:55.658395 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.658753 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:55.658812 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.659048 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHPort
I0224 23:03:55.659315 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:03:55.659508 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:03:55.659650 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHUsername
I0224 23:03:55.659846 654330 main.go:141] libmachine: Using SSH client type: native
I0224 23:03:55.660315 654330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.247 22 <nil> <nil>}
I0224 23:03:55.660336 654330 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-034636' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-034636/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-034636' | sudo tee -a /etc/hosts;
fi
fi
I0224 23:03:55.791837 654330 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0224 23:03:55.791874 654330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-632653/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-632653/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-632653/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-632653/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-632653/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-632653/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-632653/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-632653/.minikube}
I0224 23:03:55.791895 654330 buildroot.go:174] setting up certificates
I0224 23:03:55.791956 654330 provision.go:83] configureAuth start
I0224 23:03:55.791995 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetMachineName
I0224 23:03:55.792370 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetIP
I0224 23:03:55.795414 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.795782 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:55.795813 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.796018 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHHostname
I0224 23:03:55.798366 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.798712 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:55.798742 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.798860 654330 provision.go:138] copyHostCerts
I0224 23:03:55.798950 654330 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-632653/.minikube/key.pem, removing ...
I0224 23:03:55.798964 654330 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-632653/.minikube/key.pem
I0224 23:03:55.799030 654330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-632653/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-632653/.minikube/key.pem (1679 bytes)
I0224 23:03:55.799143 654330 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-632653/.minikube/ca.pem, removing ...
I0224 23:03:55.799152 654330 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-632653/.minikube/ca.pem
I0224 23:03:55.799177 654330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-632653/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-632653/.minikube/ca.pem (1082 bytes)
I0224 23:03:55.799237 654330 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-632653/.minikube/cert.pem, removing ...
I0224 23:03:55.799244 654330 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-632653/.minikube/cert.pem
I0224 23:03:55.799263 654330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-632653/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-632653/.minikube/cert.pem (1123 bytes)
I0224 23:03:55.799308 654330 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-632653/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-632653/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-632653/.minikube/certs/ca-key.pem org=jenkins.test-preload-034636 san=[192.168.39.247 192.168.39.247 localhost 127.0.0.1 minikube test-preload-034636]
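
The san=[...] list above becomes the subject alternative names of the generated server certificate, and CertExpiration:26280h0m0s from the profile config sets its lifetime. A sketch of issuing such a certificate with Go's crypto/x509 (assumption: illustrative, not minikube's provision code; caCert, caKey and serverKey, a crypto.Signer, are assumed already loaded, and the snippet needs crypto/rand, crypto/x509, crypto/x509/pkix, fmt, math/big, net and time):

// Build a server-cert template whose SANs mirror the log line above,
// then have the cluster CA sign it.
tmpl := &x509.Certificate{
	SerialNumber: big.NewInt(1),
	Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-034636"}},
	NotBefore:    time.Now(),
	NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
	KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	DNSNames:     []string{"localhost", "minikube", "test-preload-034636"},
	IPAddresses:  []net.IP{net.ParseIP("192.168.39.247"), net.ParseIP("127.0.0.1")},
}
der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, serverKey.Public(), caKey)
if err != nil {
	return fmt.Errorf("signing server cert: %w", err)
}
// der is then PEM-encoded and written out as machines/server.pem.
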
I0224 23:03:55.938582 654330 provision.go:172] copyRemoteCerts
I0224 23:03:55.938649 654330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0224 23:03:55.938678 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHHostname
I0224 23:03:55.941427 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.941932 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:55.941972 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:55.942307 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHPort
I0224 23:03:55.942557 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:03:55.942813 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHUsername
I0224 23:03:55.942987 654330 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-632653/.minikube/machines/test-preload-034636/id_rsa Username:docker}
I0224 23:03:56.038113 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0224 23:03:56.064823 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0224 23:03:56.090841 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0224 23:03:56.114551 654330 provision.go:86] duration metric: configureAuth took 322.565282ms
I0224 23:03:56.114591 654330 buildroot.go:189] setting minikube options for container-runtime
I0224 23:03:56.114855 654330 config.go:182] Loaded profile config "test-preload-034636": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0224 23:03:56.114871 654330 machine.go:91] provisioned docker machine in 598.941438ms
I0224 23:03:56.114880 654330 start.go:300] post-start starting for "test-preload-034636" (driver="kvm2")
I0224 23:03:56.114886 654330 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0224 23:03:56.114929 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:03:56.115340 654330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0224 23:03:56.115377 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHHostname
I0224 23:03:56.118288 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:56.118615 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:56.118630 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:56.118782 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHPort
I0224 23:03:56.119039 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:03:56.119222 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHUsername
I0224 23:03:56.119394 654330 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-632653/.minikube/machines/test-preload-034636/id_rsa Username:docker}
I0224 23:03:56.208207 654330 ssh_runner.go:195] Run: cat /etc/os-release
I0224 23:03:56.212874 654330 info.go:137] Remote host: Buildroot 2021.02.12
I0224 23:03:56.212913 654330 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-632653/.minikube/addons for local assets ...
I0224 23:03:56.213026 654330 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-632653/.minikube/files for local assets ...
I0224 23:03:56.213113 654330 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-632653/.minikube/files/etc/ssl/certs/6397352.pem -> 6397352.pem in /etc/ssl/certs
I0224 23:03:56.213197 654330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0224 23:03:56.221722 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/files/etc/ssl/certs/6397352.pem --> /etc/ssl/certs/6397352.pem (1708 bytes)
I0224 23:03:56.246468 654330 start.go:303] post-start completed in 131.572278ms
I0224 23:03:56.246498 654330 fix.go:57] fixHost completed within 18.959033711s
I0224 23:03:56.246522 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHHostname
I0224 23:03:56.249181 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:56.249505 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:56.249542 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:56.249692 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHPort
I0224 23:03:56.249905 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:03:56.250151 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:03:56.250353 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHUsername
I0224 23:03:56.250595 654330 main.go:141] libmachine: Using SSH client type: native
I0224 23:03:56.251132 654330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.247 22 <nil> <nil>}
I0224 23:03:56.251148 654330 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0224 23:03:56.371626 654330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677279836.314607661
I0224 23:03:56.371653 654330 fix.go:207] guest clock: 1677279836.314607661
I0224 23:03:56.371661 654330 fix.go:220] Guest: 2023-02-24 23:03:56.314607661 +0000 UTC Remote: 2023-02-24 23:03:56.246502592 +0000 UTC m=+32.427467995 (delta=68.105069ms)
I0224 23:03:56.371681 654330 fix.go:191] guest clock delta is within tolerance: 68.105069ms
I0224 23:03:56.371686 654330 start.go:83] releasing machines lock for "test-preload-034636", held for 19.084236476s
I0224 23:03:56.371707 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:03:56.372032 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetIP
I0224 23:03:56.374806 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:56.375146 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:56.375173 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:56.375413 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:03:56.375962 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:03:56.376154 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:03:56.376252 654330 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0224 23:03:56.376295 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHHostname
I0224 23:03:56.376374 654330 ssh_runner.go:195] Run: cat /version.json
I0224 23:03:56.376397 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHHostname
I0224 23:03:56.378930 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:56.379119 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:56.379318 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:56.379344 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:56.379440 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:03:56.379464 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:03:56.379515 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHPort
I0224 23:03:56.379639 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHPort
I0224 23:03:56.379764 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:03:56.379830 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:03:56.379909 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHUsername
I0224 23:03:56.380002 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHUsername
I0224 23:03:56.380084 654330 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-632653/.minikube/machines/test-preload-034636/id_rsa Username:docker}
I0224 23:03:56.380169 654330 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-632653/.minikube/machines/test-preload-034636/id_rsa Username:docker}
I0224 23:03:56.478201 654330 ssh_runner.go:195] Run: systemctl --version
I0224 23:03:56.484381 654330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0224 23:03:56.490076 654330 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0224 23:03:56.490161 654330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0224 23:03:56.506352 654330 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0224 23:03:56.506382 654330 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0224 23:03:56.506509 654330 ssh_runner.go:195] Run: sudo crictl images --output json
I0224 23:04:00.545650 654330 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.039111199s)
I0224 23:04:00.545792 654330 containerd.go:604] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
I0224 23:04:00.545861 654330 ssh_runner.go:195] Run: which lz4
I0224 23:04:00.550709 654330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0224 23:04:00.556045 654330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0224 23:04:00.556089 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
I0224 23:04:02.444438 654330 containerd.go:551] Took 1.893758 seconds to copy over tarball
I0224 23:04:02.444528 654330 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0224 23:04:05.755459 654330 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.310902308s)
I0224 23:04:05.755489 654330 containerd.go:558] Took 3.311014 seconds to extract the tarball
I0224 23:04:05.755499 654330 ssh_runner.go:146] rm: /preloaded.tar.lz4
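An editor's aside on the sequence above: because the guest had no /preloaded.tar.lz4, minikube scp'd the ~458 MB cached tarball over, unpacked it into /var with lz4, then removed it. A minimal Go sketch of the guest-side step (extractPreload is a hypothetical helper, not minikube's actual API):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload unpacks the preloaded image tarball into /var using lz4,
	// mirroring: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	func extractPreload(tarball string) error {
		if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
		}
		// The tarball is deleted afterwards to reclaim guest disk space.
		return exec.Command("sudo", "rm", "-f", tarball).Run()
	}

	func main() { _ = extractPreload("/preloaded.tar.lz4") }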
I0224 23:04:05.796156 654330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 23:04:05.902124 654330 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0224 23:04:05.920078 654330 start.go:485] detecting cgroup driver to use...
I0224 23:04:05.920196 654330 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0224 23:04:08.186351 654330 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.266113954s)
I0224 23:04:08.186447 654330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 23:04:08.201200 654330 docker.go:186] disabling cri-docker service (if available) ...
I0224 23:04:08.201285 654330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0224 23:04:08.215918 654330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0224 23:04:08.231009 654330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0224 23:04:08.336789 654330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0224 23:04:08.440743 654330 docker.go:202] disabling docker service ...
I0224 23:04:08.440817 654330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0224 23:04:08.455656 654330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0224 23:04:08.468129 654330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0224 23:04:08.580040 654330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0224 23:04:08.683358 654330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0224 23:04:08.697777 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 23:04:08.716977 654330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
I0224 23:04:08.727196 654330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0224 23:04:08.737880 654330 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0224 23:04:08.737939 654330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0224 23:04:08.748364 654330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 23:04:08.758590 654330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0224 23:04:08.768942 654330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 23:04:08.779565 654330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0224 23:04:08.790108 654330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0224 23:04:08.800010 654330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0224 23:04:08.809152 654330 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0224 23:04:08.809225 654330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0224 23:04:08.822921 654330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
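The status-255 sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is why the modprobe runs next, followed by enabling IPv4 forwarding. A hedged sketch of that recovery (ensureBridgeNetfilter is illustrative, not minikube's real helper):

	package main

	import (
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter loads br_netfilter when the bridge sysctl tree is
	// absent, then enables IPv4 forwarding, matching the two commands above.
	func ensureBridgeNetfilter() error {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return err
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() { _ = ensureBridgeNetfilter() }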
I0224 23:04:08.832520 654330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 23:04:08.935215 654330 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0224 23:04:08.960858 654330 start.go:532] Will wait 60s for socket path /run/containerd/containerd.sock
I0224 23:04:08.960949 654330 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0224 23:04:08.966785 654330 retry.go:31] will retry after 1.285176503s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0224 23:04:10.253335 654330 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
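The two stat calls bracketing 23:04:08.96 to 23:04:10.25 are a bounded poll for the containerd socket within the 60s budget from start.go:532; retry.go chose a 1.285s delay before the second attempt. The pattern reduces to roughly this (a sketch only; minikube's retry package picks its own delays):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the socket path exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(time.Second) // the real code uses a computed retry delay
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}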
I0224 23:04:10.259716 654330 start.go:553] Will wait 60s for crictl version
I0224 23:04:10.259792 654330 ssh_runner.go:195] Run: which crictl
I0224 23:04:10.264039 654330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0224 23:04:10.300761 654330 start.go:569] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.15
RuntimeApiVersion: v1alpha2
I0224 23:04:10.300858 654330 ssh_runner.go:195] Run: containerd --version
I0224 23:04:10.333263 654330 ssh_runner.go:195] Run: containerd --version
I0224 23:04:10.365024 654330 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.15 ...
I0224 23:04:10.368489 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetIP
I0224 23:04:10.371547 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:04:10.371927 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:04:10.371981 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:04:10.372216 654330 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0224 23:04:10.377120 654330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0224 23:04:10.389765 654330 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0224 23:04:10.389853 654330 ssh_runner.go:195] Run: sudo crictl images --output json
I0224 23:04:10.422597 654330 containerd.go:608] all images are preloaded for containerd runtime.
I0224 23:04:10.422624 654330 containerd.go:522] Images already preloaded, skipping extraction
I0224 23:04:10.422684 654330 ssh_runner.go:195] Run: sudo crictl images --output json
I0224 23:04:10.454053 654330 containerd.go:608] all images are preloaded for containerd runtime.
I0224 23:04:10.454083 654330 cache_images.go:84] Images are preloaded, skipping loading
I0224 23:04:10.454139 654330 ssh_runner.go:195] Run: sudo crictl info
I0224 23:04:10.484647 654330 cni.go:84] Creating CNI manager for ""
I0224 23:04:10.484675 654330 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0224 23:04:10.484699 654330 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0224 23:04:10.484721 654330 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-034636 NodeName:test-preload-034636 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0224 23:04:10.484896 654330 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.247
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-034636"
kubeletExtraArgs:
node-ip: 192.168.39.247
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.4
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0224 23:04:10.485009 654330 kubeadm.go:968] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-034636 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.247
[Install]
config:
{KubernetesVersion:v1.24.4 ClusterName:test-preload-034636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0224 23:04:10.485084 654330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
I0224 23:04:10.496083 654330 binaries.go:44] Found k8s binaries, skipping transfer
I0224 23:04:10.496171 654330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0224 23:04:10.506611 654330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (484 bytes)
I0224 23:04:10.523303 654330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0224 23:04:10.540030 654330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
I0224 23:04:10.557289 654330 ssh_runner.go:195] Run: grep 192.168.39.247 control-plane.minikube.internal$ /etc/hosts
I0224 23:04:10.561305 654330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0224 23:04:10.573958 654330 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636 for IP: 192.168.39.247
I0224 23:04:10.574034 654330 certs.go:186] acquiring lock for shared ca certs: {Name:mk55e384ddf43a3d526898f2ffd636851309dd48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 23:04:10.574202 654330 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-632653/.minikube/ca.key
I0224 23:04:10.574265 654330 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-632653/.minikube/proxy-client-ca.key
I0224 23:04:10.574350 654330 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/client.key
I0224 23:04:10.574418 654330 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/apiserver.key.890e8c75
I0224 23:04:10.574460 654330 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/proxy-client.key
I0224 23:04:10.574584 654330 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-632653/.minikube/certs/home/jenkins/minikube-integration/15909-632653/.minikube/certs/639735.pem (1338 bytes)
W0224 23:04:10.574618 654330 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-632653/.minikube/certs/home/jenkins/minikube-integration/15909-632653/.minikube/certs/639735_empty.pem, impossibly tiny 0 bytes
I0224 23:04:10.574630 654330 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-632653/.minikube/certs/home/jenkins/minikube-integration/15909-632653/.minikube/certs/ca-key.pem (1679 bytes)
I0224 23:04:10.574660 654330 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-632653/.minikube/certs/home/jenkins/minikube-integration/15909-632653/.minikube/certs/ca.pem (1082 bytes)
I0224 23:04:10.574688 654330 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-632653/.minikube/certs/home/jenkins/minikube-integration/15909-632653/.minikube/certs/cert.pem (1123 bytes)
I0224 23:04:10.574714 654330 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-632653/.minikube/certs/home/jenkins/minikube-integration/15909-632653/.minikube/certs/key.pem (1679 bytes)
I0224 23:04:10.574757 654330 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-632653/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-632653/.minikube/files/etc/ssl/certs/6397352.pem (1708 bytes)
I0224 23:04:10.575359 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0224 23:04:10.600713 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0224 23:04:10.624403 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0224 23:04:10.648760 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0224 23:04:10.674359 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0224 23:04:10.701462 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0224 23:04:10.730293 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0224 23:04:10.755585 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0224 23:04:10.781459 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/certs/639735.pem --> /usr/share/ca-certificates/639735.pem (1338 bytes)
I0224 23:04:10.807618 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/files/etc/ssl/certs/6397352.pem --> /usr/share/ca-certificates/6397352.pem (1708 bytes)
I0224 23:04:10.833131 654330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-632653/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0224 23:04:10.857828 654330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0224 23:04:10.876945 654330 ssh_runner.go:195] Run: openssl version
I0224 23:04:10.883342 654330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6397352.pem && ln -fs /usr/share/ca-certificates/6397352.pem /etc/ssl/certs/6397352.pem"
I0224 23:04:10.894280 654330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6397352.pem
I0224 23:04:10.899428 654330 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:28 /usr/share/ca-certificates/6397352.pem
I0224 23:04:10.899498 654330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6397352.pem
I0224 23:04:10.905409 654330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6397352.pem /etc/ssl/certs/3ec20f2e.0"
I0224 23:04:10.915737 654330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0224 23:04:10.926789 654330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0224 23:04:10.931983 654330 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:22 /usr/share/ca-certificates/minikubeCA.pem
I0224 23:04:10.932052 654330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0224 23:04:10.938044 654330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0224 23:04:10.949044 654330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/639735.pem && ln -fs /usr/share/ca-certificates/639735.pem /etc/ssl/certs/639735.pem"
I0224 23:04:10.960161 654330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/639735.pem
I0224 23:04:10.965440 654330 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:28 /usr/share/ca-certificates/639735.pem
I0224 23:04:10.965518 654330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/639735.pem
I0224 23:04:10.971456 654330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/639735.pem /etc/ssl/certs/51391683.0"
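For context on the openssl/ln sequence above: OpenSSL trust directories index CA certificates by subject hash, so each PEM is symlinked as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem), where the hash comes from openssl x509 -hash -noout. A sketch of that install step (installCACert is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCACert links a CA certificate into /etc/ssl/certs under its
	// OpenSSL subject hash, e.g. /etc/ssl/certs/b5213941.0.
	func installCACert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		return exec.Command("sudo", "ln", "-fs", pem, link).Run()
	}

	func main() { _ = installCACert("/usr/share/ca-certificates/minikubeCA.pem") }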
I0224 23:04:10.982021 654330 kubeadm.go:401] StartCluster: {Name:test-preload-034636 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-034636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0224 23:04:10.982170 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0224 23:04:10.982233 654330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0224 23:04:11.013130 654330 cri.go:87] found id: ""
I0224 23:04:11.013223 654330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0224 23:04:11.022856 654330 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0224 23:04:11.022891 654330 kubeadm.go:633] restartCluster start
I0224 23:04:11.022947 654330 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0224 23:04:11.032560 654330 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0224 23:04:11.033051 654330 kubeconfig.go:135] verify returned: extract IP: "test-preload-034636" does not appear in /home/jenkins/minikube-integration/15909-632653/kubeconfig
I0224 23:04:11.033177 654330 kubeconfig.go:146] "test-preload-034636" context is missing from /home/jenkins/minikube-integration/15909-632653/kubeconfig - will repair!
I0224 23:04:11.033546 654330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-632653/kubeconfig: {Name:mk66fe661d91a148579cde35b1bc63bca21297cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 23:04:11.034252 654330 kapi.go:59] client config for test-preload-034636: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/client.key", CAFile:"/home/jenkins/minikube-integration/15909-632653/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0224 23:04:11.035359 654330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0224 23:04:11.045386 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:11.045468 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:11.057885 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:11.558620 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:11.558738 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:11.570866 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:12.058327 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:12.058444 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:12.070497 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:12.558683 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:12.558796 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:12.570711 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:13.058289 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:13.058374 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:13.070356 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:13.559002 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:13.559122 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:13.570620 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:14.057975 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:14.058100 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:14.071426 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:14.558972 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:14.559060 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:14.571104 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:15.058738 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:15.058891 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:15.071587 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:15.558129 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:15.558227 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:15.569909 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:16.058490 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:16.058589 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:16.071463 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:16.558098 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:16.558210 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:16.570129 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:17.058729 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:17.058865 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:17.070647 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:17.558847 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:17.558963 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:17.570997 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:18.058615 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:18.058696 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:18.070065 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:18.558707 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:18.558837 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:18.570641 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:19.058873 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:19.058983 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:19.071385 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:19.557991 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:19.558115 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:19.570295 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:20.058833 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:20.058916 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:20.070993 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:20.558478 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:20.558570 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:20.570644 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:21.058449 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:21.058554 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:21.069742 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:21.069773 654330 api_server.go:165] Checking apiserver status ...
I0224 23:04:21.069829 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0224 23:04:21.080291 654330 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0224 23:04:21.080327 654330 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
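Each "Checking apiserver status" probe above is just a pgrep for a kube-apiserver process, retried on a roughly 500ms cadence until the budget lapses; with no kube-system containers running yet, every probe exits 1 and minikube concludes the cluster needs reconfiguring. The probe loop reduces to something like this (a sketch; the wrapper is hypothetical):

	package main

	import (
		"os/exec"
		"time"
	)

	// apiserverRunning reports whether a kube-apiserver process is up, using
	// the same pgrep pattern as the probes above.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(10 * time.Second)
		for time.Now().Before(deadline) && !apiserverRunning() {
			time.Sleep(500 * time.Millisecond)
		}
	}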
I0224 23:04:21.080341 654330 kubeadm.go:1120] stopping kube-system containers ...
I0224 23:04:21.080357 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0224 23:04:21.080422 654330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0224 23:04:21.110066 654330 cri.go:87] found id: ""
I0224 23:04:21.110168 654330 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0224 23:04:21.125126 654330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0224 23:04:21.133817 654330 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0224 23:04:21.133892 654330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0224 23:04:21.143135 654330 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0224 23:04:21.143169 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:04:21.268662 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:04:22.340778 654330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.072073634s)
I0224 23:04:22.340822 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:04:22.677649 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:04:22.745435 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
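Rather than a full kubeadm init, the restart path replays individual init phases in dependency order (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequencing (the loop is illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Phases replayed by the restart path, in the order logged above.
		for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
			cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.24.4:$PATH\" kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
			if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
				fmt.Println("phase", phase, "failed:", err)
				return
			}
		}
	}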
I0224 23:04:22.819634 654330 api_server.go:51] waiting for apiserver process to appear ...
I0224 23:04:22.819705 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:23.332134 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:23.832416 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:24.331867 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:24.832051 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:25.331640 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:25.832483 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:26.332013 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:26.832318 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:27.332659 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:27.832084 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:28.331631 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:28.832623 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:29.331870 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:29.832275 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:30.332139 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:30.831646 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:31.332431 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:31.831920 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:32.332066 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:32.831599 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:33.332336 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:33.832005 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:34.332393 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:34.832284 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:35.331993 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:35.832587 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:36.332287 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:36.832060 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:37.332105 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:37.832054 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:38.331977 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:38.831767 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:39.332017 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:39.831791 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:04:39.846508 654330 api_server.go:71] duration metric: took 17.026877501s to wait for apiserver process to appear ...
I0224 23:04:39.846551 654330 api_server.go:87] waiting for apiserver healthz status ...
I0224 23:04:39.846575 654330 api_server.go:252] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
I0224 23:04:42.914450 654330 api_server.go:278] https://192.168.39.247:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0224 23:04:42.914481 654330 api_server.go:102] status: https://192.168.39.247:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0224 23:04:43.415238 654330 api_server.go:252] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
I0224 23:04:43.425154 654330 api_server.go:278] https://192.168.39.247:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0224 23:04:43.425188 654330 api_server.go:102] status: https://192.168.39.247:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0224 23:04:43.914761 654330 api_server.go:252] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
I0224 23:04:43.924730 654330 api_server.go:278] https://192.168.39.247:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0224 23:04:43.924762 654330 api_server.go:102] status: https://192.168.39.247:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0224 23:04:44.414874 654330 api_server.go:252] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
I0224 23:04:44.422510 654330 api_server.go:278] https://192.168.39.247:8443/healthz returned 200:
ok
I0224 23:04:44.432490 654330 api_server.go:140] control plane version: v1.24.4
I0224 23:04:44.432519 654330 api_server.go:130] duration metric: took 4.585961191s to wait for apiserver health ...
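The healthz progression above is the normal cold-start arc: 403 first (the anonymous probe is rejected until RBAC bootstrap roles exist), then 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200. A minimal poller in the same spirit (a sketch; a real client would trust the cluster CA instead of skipping TLS verification):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// NOTE: InsecureSkipVerify only to keep the sketch self-contained.
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
			resp, err := client.Get("https://192.168.39.247:8443/healthz")
			if err != nil {
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("timed out waiting for healthz")
	}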
I0224 23:04:44.432529 654330 cni.go:84] Creating CNI manager for ""
I0224 23:04:44.432535 654330 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0224 23:04:44.434397 654330 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0224 23:04:44.435987 654330 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0224 23:04:44.447747 654330 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0224 23:04:44.481207 654330 system_pods.go:43] waiting for kube-system pods to appear ...
I0224 23:04:44.489671 654330 system_pods.go:59] 7 kube-system pods found
I0224 23:04:44.489714 654330 system_pods.go:61] "coredns-6d4b75cb6d-tv2qr" [6a602c52-853f-40b4-abf9-5bfe9edbc0c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0224 23:04:44.489719 654330 system_pods.go:61] "etcd-test-preload-034636" [40b45671-23cd-462f-a257-20dc46047623] Running
I0224 23:04:44.489724 654330 system_pods.go:61] "kube-apiserver-test-preload-034636" [40f2ec99-f47c-4aaf-8820-a9c2d1322d9a] Running
I0224 23:04:44.489735 654330 system_pods.go:61] "kube-controller-manager-test-preload-034636" [5f2d028d-7c16-406b-9a76-8245f7879c5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0224 23:04:44.489740 654330 system_pods.go:61] "kube-proxy-54nk7" [24bfee09-fc7f-432b-b322-64cb6a2442a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0224 23:04:44.489745 654330 system_pods.go:61] "kube-scheduler-test-preload-034636" [c54ac16b-3c0d-4342-ac95-fe65815cb74f] Running
I0224 23:04:44.489749 654330 system_pods.go:61] "storage-provisioner" [7619b280-6907-4244-a022-e385bf2c2712] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0224 23:04:44.489756 654330 system_pods.go:74] duration metric: took 8.523657ms to wait for pod list to return data ...
I0224 23:04:44.489762 654330 node_conditions.go:102] verifying NodePressure condition ...
I0224 23:04:44.498051 654330 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0224 23:04:44.498095 654330 node_conditions.go:123] node cpu capacity is 2
I0224 23:04:44.498107 654330 node_conditions.go:105] duration metric: took 8.340511ms to run NodePressure ...
I0224 23:04:44.498127 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:04:44.804194 654330 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0224 23:04:44.820081 654330 kubeadm.go:784] kubelet initialised
I0224 23:04:44.820112 654330 kubeadm.go:785] duration metric: took 15.879903ms waiting for restarted kubelet to initialise ...
I0224 23:04:44.820123 654330 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 23:04:44.826852 654330 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace to be "Ready" ...
I0224 23:04:44.834178 654330 pod_ready.go:97] node "test-preload-034636" hosting pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:44.834211 654330 pod_ready.go:81] duration metric: took 7.328132ms waiting for pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace to be "Ready" ...
E0224 23:04:44.834233 654330 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-034636" hosting pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:44.834243 654330 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:04:44.842988 654330 pod_ready.go:97] node "test-preload-034636" hosting pod "etcd-test-preload-034636" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:44.843019 654330 pod_ready.go:81] duration metric: took 8.768458ms waiting for pod "etcd-test-preload-034636" in "kube-system" namespace to be "Ready" ...
E0224 23:04:44.843030 654330 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-034636" hosting pod "etcd-test-preload-034636" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:44.843038 654330 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:04:44.851511 654330 pod_ready.go:97] node "test-preload-034636" hosting pod "kube-apiserver-test-preload-034636" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:44.851545 654330 pod_ready.go:81] duration metric: took 8.499389ms waiting for pod "kube-apiserver-test-preload-034636" in "kube-system" namespace to be "Ready" ...
E0224 23:04:44.851555 654330 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-034636" hosting pod "kube-apiserver-test-preload-034636" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:44.851561 654330 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:04:44.885153 654330 pod_ready.go:97] node "test-preload-034636" hosting pod "kube-controller-manager-test-preload-034636" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:44.885191 654330 pod_ready.go:81] duration metric: took 33.622925ms waiting for pod "kube-controller-manager-test-preload-034636" in "kube-system" namespace to be "Ready" ...
E0224 23:04:44.885204 654330 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-034636" hosting pod "kube-controller-manager-test-preload-034636" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:44.885214 654330 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-54nk7" in "kube-system" namespace to be "Ready" ...
I0224 23:04:45.285266 654330 pod_ready.go:97] node "test-preload-034636" hosting pod "kube-proxy-54nk7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:45.285294 654330 pod_ready.go:81] duration metric: took 400.072507ms waiting for pod "kube-proxy-54nk7" in "kube-system" namespace to be "Ready" ...
E0224 23:04:45.285303 654330 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-034636" hosting pod "kube-proxy-54nk7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:45.285309 654330 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:04:45.685289 654330 pod_ready.go:97] node "test-preload-034636" hosting pod "kube-scheduler-test-preload-034636" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:45.685321 654330 pod_ready.go:81] duration metric: took 400.006194ms waiting for pod "kube-scheduler-test-preload-034636" in "kube-system" namespace to be "Ready" ...
E0224 23:04:45.685332 654330 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-034636" hosting pod "kube-scheduler-test-preload-034636" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-034636" has status "Ready":"False"
I0224 23:04:45.685342 654330 pod_ready.go:38] duration metric: took 865.208408ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
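Note the shape of the waits above: every per-pod wait returns in milliseconds because pod_ready.go short-circuits when the hosting node itself reports Ready=False, so none of the 4m0s budgets are actually spent while the node is still coming up. Conceptually (a sketch; names hypothetical):

	package main

	import "fmt"

	type node struct{ ready bool }

	// waitPodReady waits for a pod's Ready condition unless its node is not
	// Ready, in which case the wait is skipped immediately, as in the log.
	func waitPodReady(pod string, n node) error {
		if !n.ready {
			return fmt.Errorf("node hosting pod %q is not Ready, skipping wait", pod)
		}
		// ... poll the pod's Ready condition up to the wait budget ...
		return nil
	}

	func main() {
		fmt.Println(waitPodReady("coredns-6d4b75cb6d-tv2qr", node{ready: false}))
	}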
I0224 23:04:45.685365 654330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0224 23:04:45.696643 654330 ops.go:34] apiserver oom_adj: -16
I0224 23:04:45.696670 654330 kubeadm.go:637] restartCluster took 34.673771336s
I0224 23:04:45.696682 654330 kubeadm.go:403] StartCluster complete in 34.714674386s
I0224 23:04:45.696704 654330 settings.go:142] acquiring lock: {Name:mk1f41949f6dbb022d08f7b4b617b379f1b350e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 23:04:45.696790 654330 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15909-632653/kubeconfig
I0224 23:04:45.697455 654330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-632653/kubeconfig: {Name:mk66fe661d91a148579cde35b1bc63bca21297cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 23:04:45.697718 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0224 23:04:45.697871 654330 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0224 23:04:45.697967 654330 addons.go:65] Setting storage-provisioner=true in profile "test-preload-034636"
I0224 23:04:45.697973 654330 config.go:182] Loaded profile config "test-preload-034636": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0224 23:04:45.698009 654330 addons.go:65] Setting default-storageclass=true in profile "test-preload-034636"
I0224 23:04:45.698035 654330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-034636"
I0224 23:04:45.698041 654330 addons.go:227] Setting addon storage-provisioner=true in "test-preload-034636"
W0224 23:04:45.698051 654330 addons.go:236] addon storage-provisioner should already be in state true
I0224 23:04:45.698126 654330 host.go:66] Checking if "test-preload-034636" exists ...
I0224 23:04:45.698421 654330 kapi.go:59] client config for test-preload-034636: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/client.key", CAFile:"/home/jenkins/minikube-integration/15909-632653/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0224 23:04:45.698550 654330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0224 23:04:45.698567 654330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0224 23:04:45.698592 654330 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 23:04:45.698612 654330 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 23:04:45.701600 654330 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-034636" context rescaled to 1 replicas
I0224 23:04:45.701645 654330 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0224 23:04:45.704308 654330 out.go:177] * Verifying Kubernetes components...
I0224 23:04:45.706358 654330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0224 23:04:45.715227 654330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
I0224 23:04:45.715324 654330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
I0224 23:04:45.715768 654330 main.go:141] libmachine: () Calling .GetVersion
I0224 23:04:45.715771 654330 main.go:141] libmachine: () Calling .GetVersion
I0224 23:04:45.716331 654330 main.go:141] libmachine: Using API Version 1
I0224 23:04:45.716352 654330 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 23:04:45.716356 654330 main.go:141] libmachine: Using API Version 1
I0224 23:04:45.716373 654330 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 23:04:45.716780 654330 main.go:141] libmachine: () Calling .GetMachineName
I0224 23:04:45.716802 654330 main.go:141] libmachine: () Calling .GetMachineName
I0224 23:04:45.717018 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetState
I0224 23:04:45.717677 654330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0224 23:04:45.717761 654330 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 23:04:45.719795 654330 kapi.go:59] client config for test-preload-034636: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-632653/.minikube/profiles/test-preload-034636/client.key", CAFile:"/home/jenkins/minikube-integration/15909-632653/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0224 23:04:45.729174 654330 addons.go:227] Setting addon default-storageclass=true in "test-preload-034636"
W0224 23:04:45.729211 654330 addons.go:236] addon default-storageclass should already be in state true
I0224 23:04:45.729244 654330 host.go:66] Checking if "test-preload-034636" exists ...
I0224 23:04:45.729745 654330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0224 23:04:45.729804 654330 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 23:04:45.734995 654330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
I0224 23:04:45.735473 654330 main.go:141] libmachine: () Calling .GetVersion
I0224 23:04:45.736011 654330 main.go:141] libmachine: Using API Version 1
I0224 23:04:45.736039 654330 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 23:04:45.736456 654330 main.go:141] libmachine: () Calling .GetMachineName
I0224 23:04:45.736692 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetState
I0224 23:04:45.738629 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:04:45.741296 654330 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0224 23:04:45.743056 654330 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0224 23:04:45.743079 654330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0224 23:04:45.743104 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHHostname
I0224 23:04:45.746272 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:04:45.746764 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:04:45.746812 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:04:45.747064 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHPort
I0224 23:04:45.747304 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:04:45.747523 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHUsername
I0224 23:04:45.747690 654330 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-632653/.minikube/machines/test-preload-034636/id_rsa Username:docker}
I0224 23:04:45.748686 654330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44701
I0224 23:04:45.749082 654330 main.go:141] libmachine: () Calling .GetVersion
I0224 23:04:45.749584 654330 main.go:141] libmachine: Using API Version 1
I0224 23:04:45.749610 654330 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 23:04:45.749966 654330 main.go:141] libmachine: () Calling .GetMachineName
I0224 23:04:45.750494 654330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0224 23:04:45.750541 654330 main.go:141] libmachine: Launching plugin server for driver kvm2
I0224 23:04:45.766099 654330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36873
I0224 23:04:45.766630 654330 main.go:141] libmachine: () Calling .GetVersion
I0224 23:04:45.767282 654330 main.go:141] libmachine: Using API Version 1
I0224 23:04:45.767306 654330 main.go:141] libmachine: () Calling .SetConfigRaw
I0224 23:04:45.767687 654330 main.go:141] libmachine: () Calling .GetMachineName
I0224 23:04:45.767931 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetState
I0224 23:04:45.769752 654330 main.go:141] libmachine: (test-preload-034636) Calling .DriverName
I0224 23:04:45.770057 654330 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0224 23:04:45.770079 654330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0224 23:04:45.770103 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHHostname
I0224 23:04:45.773593 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:04:45.774057 654330 main.go:141] libmachine: (test-preload-034636) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:99:04", ip: ""} in network mk-test-preload-034636: {Iface:virbr1 ExpiryTime:2023-02-25 00:03:49 +0000 UTC Type:0 Mac:52:54:00:60:99:04 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-034636 Clientid:01:52:54:00:60:99:04}
I0224 23:04:45.774092 654330 main.go:141] libmachine: (test-preload-034636) DBG | domain test-preload-034636 has defined IP address 192.168.39.247 and MAC address 52:54:00:60:99:04 in network mk-test-preload-034636
I0224 23:04:45.774260 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHPort
I0224 23:04:45.774485 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHKeyPath
I0224 23:04:45.774702 654330 main.go:141] libmachine: (test-preload-034636) Calling .GetSSHUsername
I0224 23:04:45.774919 654330 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-632653/.minikube/machines/test-preload-034636/id_rsa Username:docker}
I0224 23:04:45.915360 654330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0224 23:04:45.916811 654330 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0224 23:04:45.916821 654330 node_ready.go:35] waiting up to 6m0s for node "test-preload-034636" to be "Ready" ...
I0224 23:04:45.925357 654330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0224 23:04:46.874570 654330 main.go:141] libmachine: Making call to close driver server
I0224 23:04:46.874618 654330 main.go:141] libmachine: Making call to close driver server
I0224 23:04:46.874653 654330 main.go:141] libmachine: (test-preload-034636) Calling .Close
I0224 23:04:46.874722 654330 main.go:141] libmachine: (test-preload-034636) Calling .Close
I0224 23:04:46.875010 654330 main.go:141] libmachine: Successfully made call to close driver server
I0224 23:04:46.875032 654330 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 23:04:46.875043 654330 main.go:141] libmachine: Making call to close driver server
I0224 23:04:46.875056 654330 main.go:141] libmachine: (test-preload-034636) Calling .Close
I0224 23:04:46.875070 654330 main.go:141] libmachine: Successfully made call to close driver server
I0224 23:04:46.875092 654330 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 23:04:46.875093 654330 main.go:141] libmachine: (test-preload-034636) DBG | Closing plugin on server side
I0224 23:04:46.875105 654330 main.go:141] libmachine: Making call to close driver server
I0224 23:04:46.875070 654330 main.go:141] libmachine: (test-preload-034636) DBG | Closing plugin on server side
I0224 23:04:46.875119 654330 main.go:141] libmachine: (test-preload-034636) Calling .Close
I0224 23:04:46.875429 654330 main.go:141] libmachine: Successfully made call to close driver server
I0224 23:04:46.875473 654330 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 23:04:46.875473 654330 main.go:141] libmachine: (test-preload-034636) DBG | Closing plugin on server side
I0224 23:04:46.875520 654330 main.go:141] libmachine: Making call to close driver server
I0224 23:04:46.875534 654330 main.go:141] libmachine: (test-preload-034636) Calling .Close
I0224 23:04:46.875535 654330 main.go:141] libmachine: Successfully made call to close driver server
I0224 23:04:46.875563 654330 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 23:04:46.875763 654330 main.go:141] libmachine: (test-preload-034636) DBG | Closing plugin on server side
I0224 23:04:46.875809 654330 main.go:141] libmachine: Successfully made call to close driver server
I0224 23:04:46.875822 654330 main.go:141] libmachine: Making call to close connection to plugin binary
I0224 23:04:46.878330 654330 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0224 23:04:46.879920 654330 addons.go:492] enable addons completed in 1.182058198s: enabled=[storage-provisioner default-storageclass]
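(The toEnable map dumped at addons.go:489 above reduces to exactly these two addons. A toy Go sketch of that reduction, illustrative only and not minikube's addons code:)

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        // Abbreviated: the real map above carries ~30 addons.
        toEnable := map[string]bool{
            "storage-provisioner":  true,
            "default-storageclass": true,
            "ingress":              false,
            "metrics-server":       false,
        }
        var enabled []string
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled)
        fmt.Println("enabled =", enabled) // [default-storageclass storage-provisioner]
    }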
I0224 23:04:47.924902 654330 node_ready.go:58] node "test-preload-034636" has status "Ready":"False"
I0224 23:04:50.424929 654330 node_ready.go:58] node "test-preload-034636" has status "Ready":"False"
I0224 23:04:52.424966 654330 node_ready.go:58] node "test-preload-034636" has status "Ready":"False"
I0224 23:04:53.424734 654330 node_ready.go:49] node "test-preload-034636" has status "Ready":"True"
I0224 23:04:53.424776 654330 node_ready.go:38] duration metric: took 7.507931554s waiting for node "test-preload-034636" to be "Ready" ...
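(The node_ready.go wait above can be approximated with client-go. A minimal sketch assuming the kubeconfig path from this run; an illustration, not minikube's actual implementation:)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15909-632653/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        // Poll until the Ready condition flips to True, capped at 6m0s as above.
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "test-preload-034636", metav1.GetOptions{})
            if err != nil {
                return false, nil // treat API hiccups as "not yet"
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }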
I0224 23:04:53.424789 654330 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 23:04:53.430039 654330 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace to be "Ready" ...
I0224 23:04:55.446941 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:04:57.942900 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:04:59.944844 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:02.443103 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:04.444170 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:06.942696 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:08.945482 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:11.444290 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:13.944108 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:15.949155 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:18.443005 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:20.444314 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:22.944869 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:25.441745 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:27.443451 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:29.444011 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:31.945939 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:34.443624 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:36.445809 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:38.943752 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:40.944205 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:43.444397 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:45.943717 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:47.943885 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:50.446646 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:52.943333 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:55.443052 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:57.452147 654330 pod_ready.go:102] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"False"
I0224 23:05:59.443786 654330 pod_ready.go:92] pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace has status "Ready":"True"
I0224 23:05:59.443824 654330 pod_ready.go:81] duration metric: took 1m6.013752561s waiting for pod "coredns-6d4b75cb6d-tv2qr" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.443837 654330 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.449475 654330 pod_ready.go:92] pod "etcd-test-preload-034636" in "kube-system" namespace has status "Ready":"True"
I0224 23:05:59.449504 654330 pod_ready.go:81] duration metric: took 5.656968ms waiting for pod "etcd-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.449516 654330 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.456219 654330 pod_ready.go:92] pod "kube-apiserver-test-preload-034636" in "kube-system" namespace has status "Ready":"True"
I0224 23:05:59.456242 654330 pod_ready.go:81] duration metric: took 6.71715ms waiting for pod "kube-apiserver-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.456255 654330 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.461540 654330 pod_ready.go:92] pod "kube-controller-manager-test-preload-034636" in "kube-system" namespace has status "Ready":"True"
I0224 23:05:59.461567 654330 pod_ready.go:81] duration metric: took 5.304784ms waiting for pod "kube-controller-manager-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.461577 654330 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-54nk7" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.466871 654330 pod_ready.go:92] pod "kube-proxy-54nk7" in "kube-system" namespace has status "Ready":"True"
I0224 23:05:59.466897 654330 pod_ready.go:81] duration metric: took 5.31359ms waiting for pod "kube-proxy-54nk7" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.466908 654330 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.841313 654330 pod_ready.go:92] pod "kube-scheduler-test-preload-034636" in "kube-system" namespace has status "Ready":"True"
I0224 23:05:59.841343 654330 pod_ready.go:81] duration metric: took 374.428188ms waiting for pod "kube-scheduler-test-preload-034636" in "kube-system" namespace to be "Ready" ...
I0224 23:05:59.841355 654330 pod_ready.go:38] duration metric: took 1m6.41655465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 23:05:59.841376 654330 api_server.go:51] waiting for apiserver process to appear ...
I0224 23:05:59.841431 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0224 23:05:59.841490 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0224 23:05:59.883809 654330 cri.go:87] found id: "f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5"
I0224 23:05:59.883838 654330 cri.go:87] found id: ""
I0224 23:05:59.883847 654330 logs.go:277] 1 containers: [f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5]
I0224 23:05:59.883925 654330 ssh_runner.go:195] Run: which crictl
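(Each listing cycle here pairs "sudo crictl ps -a --quiet --name=<component>" with a "which crictl" lookup. A hypothetical host-side sketch of the ID-listing step, not cri.go itself:)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the step above: --quiet prints one container
    // ID per line and --name filters by component.
    func listContainers(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        ids := listContainers("kube-apiserver")
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }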
I0224 23:05:59.888518 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0224 23:05:59.888593 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0224 23:05:59.920295 654330 cri.go:87] found id: "223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d"
I0224 23:05:59.920324 654330 cri.go:87] found id: ""
I0224 23:05:59.920333 654330 logs.go:277] 1 containers: [223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d]
I0224 23:05:59.920387 654330 ssh_runner.go:195] Run: which crictl
I0224 23:05:59.925366 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0224 23:05:59.925446 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0224 23:05:59.963269 654330 cri.go:87] found id: "ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed"
I0224 23:05:59.963302 654330 cri.go:87] found id: ""
I0224 23:05:59.963310 654330 logs.go:277] 1 containers: [ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed]
I0224 23:05:59.963368 654330 ssh_runner.go:195] Run: which crictl
I0224 23:05:59.967706 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0224 23:05:59.967801 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0224 23:05:59.999440 654330 cri.go:87] found id: "3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df"
I0224 23:05:59.999467 654330 cri.go:87] found id: ""
I0224 23:05:59.999475 654330 logs.go:277] 1 containers: [3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df]
I0224 23:05:59.999544 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:00.005264 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0224 23:06:00.005352 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0224 23:06:00.038076 654330 cri.go:87] found id: "f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d"
I0224 23:06:00.038113 654330 cri.go:87] found id: ""
I0224 23:06:00.038123 654330 logs.go:277] 1 containers: [f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d]
I0224 23:06:00.038194 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:00.043594 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0224 23:06:00.043683 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0224 23:06:00.075111 654330 cri.go:87] found id: "a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c"
I0224 23:06:00.075137 654330 cri.go:87] found id: ""
I0224 23:06:00.075145 654330 logs.go:277] 1 containers: [a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c]
I0224 23:06:00.075212 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:00.079763 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0224 23:06:00.079852 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0224 23:06:00.113002 654330 cri.go:87] found id: ""
I0224 23:06:00.113036 654330 logs.go:277] 0 containers: []
W0224 23:06:00.113043 654330 logs.go:279] No container was found matching "kindnet"
I0224 23:06:00.113060 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0224 23:06:00.113125 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0224 23:06:00.154594 654330 cri.go:87] found id: "6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae"
I0224 23:06:00.154628 654330 cri.go:87] found id: "441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6"
I0224 23:06:00.154636 654330 cri.go:87] found id: ""
I0224 23:06:00.154646 654330 logs.go:277] 2 containers: [6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae 441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6]
I0224 23:06:00.154710 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:00.158918 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:00.162608 654330 logs.go:123] Gathering logs for etcd [223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d] ...
I0224 23:06:00.162635 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d"
I0224 23:06:00.200975 654330 logs.go:123] Gathering logs for coredns [ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed] ...
I0224 23:06:00.201015 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed"
I0224 23:06:00.237825 654330 logs.go:123] Gathering logs for kube-controller-manager [a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c] ...
I0224 23:06:00.237862 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c"
I0224 23:06:00.288501 654330 logs.go:123] Gathering logs for storage-provisioner [6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae] ...
I0224 23:06:00.288544 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae"
I0224 23:06:00.332803 654330 logs.go:123] Gathering logs for container status ...
I0224 23:06:00.332836 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 23:06:00.377620 654330 logs.go:123] Gathering logs for describe nodes ...
I0224 23:06:00.377652 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0224 23:06:00.518976 654330 logs.go:123] Gathering logs for kube-apiserver [f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5] ...
I0224 23:06:00.519020 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5"
I0224 23:06:00.564349 654330 logs.go:123] Gathering logs for kube-scheduler [3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df] ...
I0224 23:06:00.564385 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df"
I0224 23:06:00.599869 654330 logs.go:123] Gathering logs for kube-proxy [f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d] ...
I0224 23:06:00.599917 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d"
I0224 23:06:00.633373 654330 logs.go:123] Gathering logs for storage-provisioner [441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6] ...
I0224 23:06:00.633409 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6"
I0224 23:06:00.663984 654330 logs.go:123] Gathering logs for containerd ...
I0224 23:06:00.664017 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0224 23:06:00.711681 654330 logs.go:123] Gathering logs for kubelet ...
I0224 23:06:00.711723 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0224 23:06:00.813510 654330 logs.go:123] Gathering logs for dmesg ...
I0224 23:06:00.813560 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
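(All of these log sources are reachable from the host as well. A rough Go sketch that replays three of them over "minikube ssh"; the journalctl and dmesg invocations are copied verbatim from this log, while the wrapper itself is hypothetical:)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        profile := "test-preload-034636"
        // Each command string is handed to the guest shell as-is, so the
        // dmesg pipeline survives the hop.
        sources := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        }
        for _, s := range sources {
            fmt.Printf("==> %s <==\n", s.name)
            c := exec.Command("minikube", "ssh", "-p", profile, "--", s.cmd)
            c.Stdout, c.Stderr = os.Stdout, os.Stderr
            _ = c.Run() // best-effort, like the sweep above
        }
    }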
I0224 23:06:03.327749 654330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:06:03.342968 654330 api_server.go:71] duration metric: took 1m17.641277877s to wait for apiserver process to appear ...
I0224 23:06:03.343014 654330 api_server.go:87] waiting for apiserver healthz status ...
I0224 23:06:03.343054 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0224 23:06:03.343122 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0224 23:06:03.378592 654330 cri.go:87] found id: "f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5"
I0224 23:06:03.378626 654330 cri.go:87] found id: ""
I0224 23:06:03.378635 654330 logs.go:277] 1 containers: [f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5]
I0224 23:06:03.378695 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:03.382977 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0224 23:06:03.383059 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0224 23:06:03.417075 654330 cri.go:87] found id: "223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d"
I0224 23:06:03.417104 654330 cri.go:87] found id: ""
I0224 23:06:03.417111 654330 logs.go:277] 1 containers: [223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d]
I0224 23:06:03.417181 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:03.421983 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0224 23:06:03.422070 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0224 23:06:03.453089 654330 cri.go:87] found id: "ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed"
I0224 23:06:03.453123 654330 cri.go:87] found id: ""
I0224 23:06:03.453133 654330 logs.go:277] 1 containers: [ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed]
I0224 23:06:03.453199 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:03.457371 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0224 23:06:03.457447 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0224 23:06:03.495261 654330 cri.go:87] found id: "3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df"
I0224 23:06:03.495296 654330 cri.go:87] found id: ""
I0224 23:06:03.495306 654330 logs.go:277] 1 containers: [3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df]
I0224 23:06:03.495377 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:03.500083 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0224 23:06:03.500159 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0224 23:06:03.538395 654330 cri.go:87] found id: "f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d"
I0224 23:06:03.538426 654330 cri.go:87] found id: ""
I0224 23:06:03.538435 654330 logs.go:277] 1 containers: [f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d]
I0224 23:06:03.538492 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:03.543495 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0224 23:06:03.543580 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0224 23:06:03.576667 654330 cri.go:87] found id: "a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c"
I0224 23:06:03.576700 654330 cri.go:87] found id: ""
I0224 23:06:03.576711 654330 logs.go:277] 1 containers: [a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c]
I0224 23:06:03.576775 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:03.581219 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0224 23:06:03.581323 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0224 23:06:03.612255 654330 cri.go:87] found id: ""
I0224 23:06:03.612292 654330 logs.go:277] 0 containers: []
W0224 23:06:03.612303 654330 logs.go:279] No container was found matching "kindnet"
I0224 23:06:03.612311 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0224 23:06:03.612392 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0224 23:06:03.648065 654330 cri.go:87] found id: "6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae"
I0224 23:06:03.648099 654330 cri.go:87] found id: "441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6"
I0224 23:06:03.648107 654330 cri.go:87] found id: ""
I0224 23:06:03.648116 654330 logs.go:277] 2 containers: [6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae 441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6]
I0224 23:06:03.648177 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:03.652433 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:03.656859 654330 logs.go:123] Gathering logs for storage-provisioner [441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6] ...
I0224 23:06:03.656893 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6"
I0224 23:06:03.687877 654330 logs.go:123] Gathering logs for container status ...
I0224 23:06:03.687914 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 23:06:03.726341 654330 logs.go:123] Gathering logs for kubelet ...
I0224 23:06:03.726394 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0224 23:06:03.828851 654330 logs.go:123] Gathering logs for describe nodes ...
I0224 23:06:03.828909 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0224 23:06:03.949108 654330 logs.go:123] Gathering logs for kube-apiserver [f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5] ...
I0224 23:06:03.949153 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5"
I0224 23:06:03.989466 654330 logs.go:123] Gathering logs for etcd [223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d] ...
I0224 23:06:03.989512 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d"
I0224 23:06:04.030207 654330 logs.go:123] Gathering logs for coredns [ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed] ...
I0224 23:06:04.030247 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed"
I0224 23:06:04.063288 654330 logs.go:123] Gathering logs for containerd ...
I0224 23:06:04.063324 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0224 23:06:04.111184 654330 logs.go:123] Gathering logs for dmesg ...
I0224 23:06:04.111234 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0224 23:06:04.130230 654330 logs.go:123] Gathering logs for kube-scheduler [3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df] ...
I0224 23:06:04.130264 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df"
I0224 23:06:04.164876 654330 logs.go:123] Gathering logs for kube-proxy [f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d] ...
I0224 23:06:04.164915 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d"
I0224 23:06:04.210519 654330 logs.go:123] Gathering logs for kube-controller-manager [a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c] ...
I0224 23:06:04.210556 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c"
I0224 23:06:04.253269 654330 logs.go:123] Gathering logs for storage-provisioner [6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae] ...
I0224 23:06:04.253308 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae"
I0224 23:06:06.789816 654330 api_server.go:252] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
I0224 23:06:06.798003 654330 api_server.go:278] https://192.168.39.247:8443/healthz returned 200:
ok
I0224 23:06:06.799131 654330 api_server.go:140] control plane version: v1.24.4
I0224 23:06:06.799161 654330 api_server.go:130] duration metric: took 3.456138891s to wait for apiserver health ...
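(The healthz probe above can be reproduced against the same endpoint using the client certificate paths from the rest.Config dump earlier in this log. A minimal Go sketch, not minikube's api_server.go:)

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        base := "/home/jenkins/minikube-integration/15909-632653/.minikube"
        cert, err := tls.LoadX509KeyPair(
            base+"/profiles/test-preload-034636/client.crt",
            base+"/profiles/test-preload-034636/client.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile(base + "/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        }}
        resp, err := client.Get("https://192.168.39.247:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body)) // "200 OK ok" in the run above
    }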
I0224 23:06:06.799172 654330 system_pods.go:43] waiting for kube-system pods to appear ...
I0224 23:06:06.799223 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0224 23:06:06.799294 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0224 23:06:06.836095 654330 cri.go:87] found id: "f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5"
I0224 23:06:06.836133 654330 cri.go:87] found id: ""
I0224 23:06:06.836143 654330 logs.go:277] 1 containers: [f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5]
I0224 23:06:06.836248 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:06.841758 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0224 23:06:06.841848 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0224 23:06:06.873563 654330 cri.go:87] found id: "223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d"
I0224 23:06:06.873590 654330 cri.go:87] found id: ""
I0224 23:06:06.873598 654330 logs.go:277] 1 containers: [223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d]
I0224 23:06:06.873660 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:06.877852 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0224 23:06:06.877934 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0224 23:06:06.911975 654330 cri.go:87] found id: "ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed"
I0224 23:06:06.912006 654330 cri.go:87] found id: ""
I0224 23:06:06.912019 654330 logs.go:277] 1 containers: [ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed]
I0224 23:06:06.912083 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:06.916703 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0224 23:06:06.916764 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0224 23:06:06.947450 654330 cri.go:87] found id: "3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df"
I0224 23:06:06.947479 654330 cri.go:87] found id: ""
I0224 23:06:06.947489 654330 logs.go:277] 1 containers: [3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df]
I0224 23:06:06.947540 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:06.951905 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0224 23:06:06.951972 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0224 23:06:06.983280 654330 cri.go:87] found id: "f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d"
I0224 23:06:06.983309 654330 cri.go:87] found id: ""
I0224 23:06:06.983318 654330 logs.go:277] 1 containers: [f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d]
I0224 23:06:06.983386 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:06.987752 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0224 23:06:06.987844 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0224 23:06:07.025944 654330 cri.go:87] found id: "a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c"
I0224 23:06:07.025986 654330 cri.go:87] found id: ""
I0224 23:06:07.025997 654330 logs.go:277] 1 containers: [a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c]
I0224 23:06:07.026051 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:07.031384 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0224 23:06:07.031456 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0224 23:06:07.063013 654330 cri.go:87] found id: ""
I0224 23:06:07.063053 654330 logs.go:277] 0 containers: []
W0224 23:06:07.063064 654330 logs.go:279] No container was found matching "kindnet"
I0224 23:06:07.063072 654330 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0224 23:06:07.063147 654330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0224 23:06:07.092884 654330 cri.go:87] found id: "6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae"
I0224 23:06:07.092919 654330 cri.go:87] found id: "441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6"
I0224 23:06:07.092926 654330 cri.go:87] found id: ""
I0224 23:06:07.092936 654330 logs.go:277] 2 containers: [6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae 441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6]
I0224 23:06:07.093004 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:07.097275 654330 ssh_runner.go:195] Run: which crictl
I0224 23:06:07.101809 654330 logs.go:123] Gathering logs for dmesg ...
I0224 23:06:07.101837 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0224 23:06:07.115671 654330 logs.go:123] Gathering logs for describe nodes ...
I0224 23:06:07.115705 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0224 23:06:07.259307 654330 logs.go:123] Gathering logs for coredns [ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed] ...
I0224 23:06:07.259352 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed"
I0224 23:06:07.308611 654330 logs.go:123] Gathering logs for kube-scheduler [3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df] ...
I0224 23:06:07.308662 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df"
I0224 23:06:07.342123 654330 logs.go:123] Gathering logs for containerd ...
I0224 23:06:07.342172 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0224 23:06:07.388399 654330 logs.go:123] Gathering logs for container status ...
I0224 23:06:07.388456 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 23:06:07.423564 654330 logs.go:123] Gathering logs for storage-provisioner [441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6] ...
I0224 23:06:07.423612 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6"
I0224 23:06:07.456421 654330 logs.go:123] Gathering logs for kubelet ...
I0224 23:06:07.456465 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0224 23:06:07.554075 654330 logs.go:123] Gathering logs for kube-apiserver [f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5] ...
I0224 23:06:07.554132 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5"
I0224 23:06:07.590969 654330 logs.go:123] Gathering logs for etcd [223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d] ...
I0224 23:06:07.591017 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d"
I0224 23:06:07.631438 654330 logs.go:123] Gathering logs for kube-proxy [f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d] ...
I0224 23:06:07.631478 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d"
I0224 23:06:07.670279 654330 logs.go:123] Gathering logs for kube-controller-manager [a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c] ...
I0224 23:06:07.670314 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c"
I0224 23:06:07.721542 654330 logs.go:123] Gathering logs for storage-provisioner [6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae] ...
I0224 23:06:07.721594 654330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae"
I0224 23:06:10.265150 654330 system_pods.go:59] 7 kube-system pods found
I0224 23:06:10.265185 654330 system_pods.go:61] "coredns-6d4b75cb6d-tv2qr" [6a602c52-853f-40b4-abf9-5bfe9edbc0c7] Running
I0224 23:06:10.265191 654330 system_pods.go:61] "etcd-test-preload-034636" [40b45671-23cd-462f-a257-20dc46047623] Running
I0224 23:06:10.265195 654330 system_pods.go:61] "kube-apiserver-test-preload-034636" [40f2ec99-f47c-4aaf-8820-a9c2d1322d9a] Running
I0224 23:06:10.265200 654330 system_pods.go:61] "kube-controller-manager-test-preload-034636" [5f2d028d-7c16-406b-9a76-8245f7879c5a] Running
I0224 23:06:10.265204 654330 system_pods.go:61] "kube-proxy-54nk7" [24bfee09-fc7f-432b-b322-64cb6a2442a0] Running
I0224 23:06:10.265208 654330 system_pods.go:61] "kube-scheduler-test-preload-034636" [c54ac16b-3c0d-4342-ac95-fe65815cb74f] Running
I0224 23:06:10.265211 654330 system_pods.go:61] "storage-provisioner" [7619b280-6907-4244-a022-e385bf2c2712] Running
I0224 23:06:10.265218 654330 system_pods.go:74] duration metric: took 3.466039493s to wait for pod list to return data ...
I0224 23:06:10.265225 654330 default_sa.go:34] waiting for default service account to be created ...
I0224 23:06:10.268112 654330 default_sa.go:45] found service account: "default"
I0224 23:06:10.268144 654330 default_sa.go:55] duration metric: took 2.914102ms for default service account to be created ...
I0224 23:06:10.268158 654330 system_pods.go:116] waiting for k8s-apps to be running ...
I0224 23:06:10.274104 654330 system_pods.go:86] 7 kube-system pods found
I0224 23:06:10.274149 654330 system_pods.go:89] "coredns-6d4b75cb6d-tv2qr" [6a602c52-853f-40b4-abf9-5bfe9edbc0c7] Running
I0224 23:06:10.274155 654330 system_pods.go:89] "etcd-test-preload-034636" [40b45671-23cd-462f-a257-20dc46047623] Running
I0224 23:06:10.274159 654330 system_pods.go:89] "kube-apiserver-test-preload-034636" [40f2ec99-f47c-4aaf-8820-a9c2d1322d9a] Running
I0224 23:06:10.274163 654330 system_pods.go:89] "kube-controller-manager-test-preload-034636" [5f2d028d-7c16-406b-9a76-8245f7879c5a] Running
I0224 23:06:10.274167 654330 system_pods.go:89] "kube-proxy-54nk7" [24bfee09-fc7f-432b-b322-64cb6a2442a0] Running
I0224 23:06:10.274171 654330 system_pods.go:89] "kube-scheduler-test-preload-034636" [c54ac16b-3c0d-4342-ac95-fe65815cb74f] Running
I0224 23:06:10.274175 654330 system_pods.go:89] "storage-provisioner" [7619b280-6907-4244-a022-e385bf2c2712] Running
I0224 23:06:10.274183 654330 system_pods.go:126] duration metric: took 6.019884ms to wait for k8s-apps to be running ...
I0224 23:06:10.274190 654330 system_svc.go:44] waiting for kubelet service to be running ....
I0224 23:06:10.274239 654330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0224 23:06:10.289214 654330 system_svc.go:56] duration metric: took 15.010736ms WaitForService to wait for kubelet.
I0224 23:06:10.289254 654330 kubeadm.go:578] duration metric: took 1m24.587576051s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0224 23:06:10.289276 654330 node_conditions.go:102] verifying NodePressure condition ...
I0224 23:06:10.293389 654330 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0224 23:06:10.293430 654330 node_conditions.go:123] node cpu capacity is 2
I0224 23:06:10.293448 654330 node_conditions.go:105] duration metric: took 4.165774ms to run NodePressure ...
I0224 23:06:10.293464 654330 start.go:228] waiting for startup goroutines ...
I0224 23:06:10.293473 654330 start.go:233] waiting for cluster config update ...
I0224 23:06:10.293489 654330 start.go:242] writing updated cluster config ...
I0224 23:06:10.293892 654330 ssh_runner.go:195] Run: rm -f paused
I0224 23:06:10.348692 654330 start.go:555] kubectl: 1.26.1, cluster: 1.24.4 (minor skew: 2)
I0224 23:06:10.351282 654330 out.go:177]
W0224 23:06:10.353074 654330 out.go:239] ! /usr/local/bin/kubectl is version 1.26.1, which may have incompatibilities with Kubernetes 1.24.4.
I0224 23:06:10.354793 654330 out.go:177] - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
I0224 23:06:10.356620 654330 out.go:177] * Done! kubectl is now configured to use "test-preload-034636" cluster and "default" namespace by default
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
ec0d9007b2187 a4ca41631cc7a 12 seconds ago Running coredns 1 2b36394870616
6849df7b14774 6e38f40d628db 29 seconds ago Running storage-provisioner 2 acb2f9152e2d7
f11356fcbdbef 7a53d1e08ef58 45 seconds ago Running kube-proxy 1 144c414e8d5c0
441cc62483e6c 6e38f40d628db 59 seconds ago Exited storage-provisioner 1 acb2f9152e2d7
3c7d342b788c9 03fa22539fc1c About a minute ago Running kube-scheduler 1 5ba8dd9591a40
a5963d182fe31 1f99cb6da9a82 About a minute ago Running kube-controller-manager 1 7f06b5fd8fc5f
f08b376d9dfb0 6cab9d1bed1be About a minute ago Running kube-apiserver 1 3de1c804d4d16
223df8456f952 aebe758cef4cd About a minute ago Running etcd 1 82a9ab63468c1
*
* ==> containerd <==
* -- Journal begins at Fri 2023-02-24 23:03:49 UTC, ends at Fri 2023-02-24 23:06:11 UTC. --
Feb 24 23:05:11 test-preload-034636 containerd[630]: time="2023-02-24T23:05:11.943620640Z" level=info msg="CreateContainer within sandbox \"acb2f9152e2d7929978a38b5c82a6c783b91fc2b5d250ca0bbaf6a6f97f61885\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6\""
Feb 24 23:05:11 test-preload-034636 containerd[630]: time="2023-02-24T23:05:11.949420694Z" level=info msg="StartContainer for \"441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6\""
Feb 24 23:05:12 test-preload-034636 containerd[630]: time="2023-02-24T23:05:12.026890641Z" level=info msg="StartContainer for \"441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6\" returns successfully"
Feb 24 23:05:16 test-preload-034636 containerd[630]: time="2023-02-24T23:05:16.907367460Z" level=info msg="CreateContainer within sandbox \"2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Feb 24 23:05:16 test-preload-034636 containerd[630]: time="2023-02-24T23:05:16.926061795Z" level=error msg="CreateContainer within sandbox \"2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0\" for &ContainerMetadata{Name:coredns,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1776606681 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists"
Feb 24 23:05:25 test-preload-034636 containerd[630]: time="2023-02-24T23:05:25.907952126Z" level=info msg="CreateContainer within sandbox \"144c414e8d5c00f3aca76a8cd27c9db682c5d45b907bd66c63d7c93a7346410c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
Feb 24 23:05:25 test-preload-034636 containerd[630]: time="2023-02-24T23:05:25.957591915Z" level=info msg="CreateContainer within sandbox \"144c414e8d5c00f3aca76a8cd27c9db682c5d45b907bd66c63d7c93a7346410c\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d\""
Feb 24 23:05:25 test-preload-034636 containerd[630]: time="2023-02-24T23:05:25.958838871Z" level=info msg="StartContainer for \"f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d\""
Feb 24 23:05:26 test-preload-034636 containerd[630]: time="2023-02-24T23:05:26.053983362Z" level=info msg="StartContainer for \"f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d\" returns successfully"
Feb 24 23:05:29 test-preload-034636 containerd[630]: time="2023-02-24T23:05:29.907626152Z" level=info msg="CreateContainer within sandbox \"2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Feb 24 23:05:29 test-preload-034636 containerd[630]: time="2023-02-24T23:05:29.921902967Z" level=error msg="CreateContainer within sandbox \"2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0\" for &ContainerMetadata{Name:coredns,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-819311406 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists"
Feb 24 23:05:42 test-preload-034636 containerd[630]: time="2023-02-24T23:05:42.129951457Z" level=info msg="shim disconnected" id=441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6
Feb 24 23:05:42 test-preload-034636 containerd[630]: time="2023-02-24T23:05:42.130068759Z" level=warning msg="cleaning up after shim disconnected" id=441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6 namespace=k8s.io
Feb 24 23:05:42 test-preload-034636 containerd[630]: time="2023-02-24T23:05:42.130086077Z" level=info msg="cleaning up dead shim"
Feb 24 23:05:42 test-preload-034636 containerd[630]: time="2023-02-24T23:05:42.147936381Z" level=warning msg="cleanup warnings time=\"2023-02-24T23:05:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1610 runtime=io.containerd.runc.v2\n"
Feb 24 23:05:42 test-preload-034636 containerd[630]: time="2023-02-24T23:05:42.172723135Z" level=info msg="CreateContainer within sandbox \"acb2f9152e2d7929978a38b5c82a6c783b91fc2b5d250ca0bbaf6a6f97f61885\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
Feb 24 23:05:42 test-preload-034636 containerd[630]: time="2023-02-24T23:05:42.214302500Z" level=info msg="CreateContainer within sandbox \"acb2f9152e2d7929978a38b5c82a6c783b91fc2b5d250ca0bbaf6a6f97f61885\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae\""
Feb 24 23:05:42 test-preload-034636 containerd[630]: time="2023-02-24T23:05:42.215640558Z" level=info msg="StartContainer for \"6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae\""
Feb 24 23:05:42 test-preload-034636 containerd[630]: time="2023-02-24T23:05:42.316565521Z" level=info msg="StartContainer for \"6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae\" returns successfully"
Feb 24 23:05:43 test-preload-034636 containerd[630]: time="2023-02-24T23:05:43.908361587Z" level=info msg="CreateContainer within sandbox \"2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Feb 24 23:05:43 test-preload-034636 containerd[630]: time="2023-02-24T23:05:43.948392895Z" level=error msg="CreateContainer within sandbox \"2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0\" for &ContainerMetadata{Name:coredns,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2024837891 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists"
Feb 24 23:05:58 test-preload-034636 containerd[630]: time="2023-02-24T23:05:58.908093827Z" level=info msg="CreateContainer within sandbox \"2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Feb 24 23:05:58 test-preload-034636 containerd[630]: time="2023-02-24T23:05:58.958274848Z" level=info msg="CreateContainer within sandbox \"2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed\""
Feb 24 23:05:58 test-preload-034636 containerd[630]: time="2023-02-24T23:05:58.960705118Z" level=info msg="StartContainer for \"ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed\""
Feb 24 23:05:59 test-preload-034636 containerd[630]: time="2023-02-24T23:05:59.053773830Z" level=info msg="StartContainer for \"ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed\" returns successfully"
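The repeated coredns CreateContainer failures above ("failed to rename ... file exists") are consistent with stale overlayfs snapshot directories surviving on /mnt/vda1 from the pre-stop run and colliding with the sequential snapshot IDs containerd allocates after the restart; each retry targets the next ID (32, 33, 34) until an unused one is reached and the container finally starts at 23:05:58. A minimal diagnostic sketch, assuming ctr is available inside the VM (these commands are illustrative and were not part of the test run):
  # list containerd's view of the snapshots in the k8s.io namespace
  out/minikube-linux-amd64 ssh -p test-preload-034636 -- sudo ctr -n k8s.io snapshots list
  # compare against the on-disk directories that the failed renames target
  out/minikube-linux-amd64 ssh -p test-preload-034636 -- sudo ls /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots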
*
* ==> coredns [ec0d9007b21873eb725297b9eaa39a29cd66eaf9e87428a817eb112e4d5659ed] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] 127.0.0.1:47177 - 2361 "HINFO IN 2631310062541767441.2730700751081588516. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015682838s
*
* ==> describe nodes <==
* Name: test-preload-034636
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=test-preload-034636
kubernetes.io/os=linux
minikube.k8s.io/commit=08976559d74fb9c2654733dc21cb8f9d9ec24374
minikube.k8s.io/name=test-preload-034636
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_02_24T23_00_55_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 24 Feb 2023 23:00:51 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: test-preload-034636
AcquireTime: <unset>
RenewTime: Fri, 24 Feb 2023 23:06:05 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 24 Feb 2023 23:04:53 +0000 Fri, 24 Feb 2023 23:00:48 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 24 Feb 2023 23:04:53 +0000 Fri, 24 Feb 2023 23:00:48 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 24 Feb 2023 23:04:53 +0000 Fri, 24 Feb 2023 23:00:48 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 24 Feb 2023 23:04:53 +0000 Fri, 24 Feb 2023 23:04:53 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.247
Hostname: test-preload-034636
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 412983c41c6a4a219b4a5d187ddac87f
System UUID: 412983c4-1c6a-4a21-9b4a-5d187ddac87f
Boot ID: 0e1984b2-5405-4c0f-8226-a5cad3ff097b
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.15
Kubelet Version: v1.24.4
Kube-Proxy Version: v1.24.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-tv2qr 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 5m4s
kube-system etcd-test-preload-034636 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 5m16s
kube-system kube-apiserver-test-preload-034636 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m18s
kube-system kube-controller-manager-test-preload-034636 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m17s
kube-system kube-proxy-54nk7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m4s
kube-system kube-scheduler-test-preload-034636 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m16s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m1s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 45s kube-proxy
Normal Starting 5m1s kube-proxy
Normal NodeHasSufficientMemory 5m26s (x5 over 5m26s) kubelet Node test-preload-034636 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m26s (x5 over 5m26s) kubelet Node test-preload-034636 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m26s (x4 over 5m26s) kubelet Node test-preload-034636 status is now: NodeHasSufficientPID
Normal Starting 5m16s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m16s kubelet Node test-preload-034636 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m16s kubelet Node test-preload-034636 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m16s kubelet Node test-preload-034636 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m16s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 5m6s kubelet Node test-preload-034636 status is now: NodeReady
Normal RegisteredNode 5m4s node-controller Node test-preload-034636 event: Registered Node test-preload-034636 in Controller
Normal Starting 109s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 109s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 108s (x8 over 109s) kubelet Node test-preload-034636 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 108s (x8 over 109s) kubelet Node test-preload-034636 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 108s (x7 over 109s) kubelet Node test-preload-034636 status is now: NodeHasSufficientPID
Normal RegisteredNode 76s node-controller Node test-preload-034636 event: Registered Node test-preload-034636 in Controller
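The Events table records two complete kubelet start cycles (one from the initial cluster creation at ~5m16s, one at 109s after the stop/start), matching the test's restart flow. The same view can be pulled directly, assuming the kubectl context this run created (illustrative commands):
  kubectl --context test-preload-034636 get events --sort-by=.lastTimestamp
  kubectl --context test-preload-034636 describe node test-preload-034636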
*
* ==> dmesg <==
* [Feb24 23:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.074134] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.092137] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.523321] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.148913] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.795621] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
[Feb24 23:04] systemd-fstab-generator[528]: Ignoring "noauto" for root device
[ +2.441308] systemd-fstab-generator[559]: Ignoring "noauto" for root device
[ +0.101881] systemd-fstab-generator[570]: Ignoring "noauto" for root device
[ +0.136549] systemd-fstab-generator[583]: Ignoring "noauto" for root device
[ +0.103817] systemd-fstab-generator[594]: Ignoring "noauto" for root device
[ +0.256894] systemd-fstab-generator[621]: Ignoring "noauto" for root device
[ +13.728317] systemd-fstab-generator[816]: Ignoring "noauto" for root device
[ +28.963686] kauditd_printk_skb: 7 callbacks suppressed
[Feb24 23:05] kauditd_printk_skb: 8 callbacks suppressed
*
* ==> etcd [223df8456f952b545fc77398806df4285fa734229ab1ee9fb2038e52d3cd5a5d] <==
* {"level":"info","ts":"2023-02-24T23:04:24.251Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b60ca5935c0b4769","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-02-24T23:04:24.252Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-02-24T23:04:24.252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 switched to configuration voters=(13118041866946430825)"}
{"level":"info","ts":"2023-02-24T23:04:24.253Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7fda2fc0436a8884","local-member-id":"b60ca5935c0b4769","added-peer-id":"b60ca5935c0b4769","added-peer-peer-urls":["https://192.168.39.247:2380"]}
{"level":"info","ts":"2023-02-24T23:04:24.253Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fda2fc0436a8884","local-member-id":"b60ca5935c0b4769","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-24T23:04:24.253Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-24T23:04:24.254Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-02-24T23:04:24.254Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b60ca5935c0b4769","initial-advertise-peer-urls":["https://192.168.39.247:2380"],"listen-peer-urls":["https://192.168.39.247:2380"],"advertise-client-urls":["https://192.168.39.247:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.247:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-02-24T23:04:24.254Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-02-24T23:04:24.254Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.247:2380"}
{"level":"info","ts":"2023-02-24T23:04:24.254Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.247:2380"}
{"level":"info","ts":"2023-02-24T23:04:25.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 is starting a new election at term 2"}
{"level":"info","ts":"2023-02-24T23:04:25.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became pre-candidate at term 2"}
{"level":"info","ts":"2023-02-24T23:04:25.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 received MsgPreVoteResp from b60ca5935c0b4769 at term 2"}
{"level":"info","ts":"2023-02-24T23:04:25.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became candidate at term 3"}
{"level":"info","ts":"2023-02-24T23:04:25.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 received MsgVoteResp from b60ca5935c0b4769 at term 3"}
{"level":"info","ts":"2023-02-24T23:04:25.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became leader at term 3"}
{"level":"info","ts":"2023-02-24T23:04:25.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b60ca5935c0b4769 elected leader b60ca5935c0b4769 at term 3"}
{"level":"info","ts":"2023-02-24T23:04:25.834Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b60ca5935c0b4769","local-member-attributes":"{Name:test-preload-034636 ClientURLs:[https://192.168.39.247:2379]}","request-path":"/0/members/b60ca5935c0b4769/attributes","cluster-id":"7fda2fc0436a8884","publish-timeout":"7s"}
{"level":"info","ts":"2023-02-24T23:04:25.834Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-24T23:04:25.836Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.247:2379"}
{"level":"info","ts":"2023-02-24T23:04:25.836Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-24T23:04:25.837Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-02-24T23:04:25.837Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-02-24T23:04:25.837Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> kernel <==
* 23:06:11 up 2 min, 0 users, load average: 0.83, 0.49, 0.19
Linux test-preload-034636 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [f08b376d9dfb09119e5345da8cdc9efec0f0c42420d4eaae1253c16cfa80e7d5] <==
* I0224 23:04:42.869995 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0224 23:04:42.870122 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0224 23:04:42.870355 1 crd_finalizer.go:266] Starting CRDFinalizer
I0224 23:04:42.875114 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0224 23:04:42.875901 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0224 23:04:42.910560 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0224 23:04:42.911284 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0224 23:04:42.966441 1 cache.go:39] Caches are synced for autoregister controller
I0224 23:04:42.968309 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0224 23:04:42.968812 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0224 23:04:42.969418 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0224 23:04:42.975894 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0224 23:04:43.011828 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0224 23:04:43.026614 1 shared_informer.go:262] Caches are synced for node_authorizer
I0224 23:04:43.033270 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0224 23:04:43.525541 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0224 23:04:43.872760 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0224 23:04:44.647623 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0224 23:04:44.667148 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0224 23:04:44.733230 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0224 23:04:44.765556 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0224 23:04:44.783534 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0224 23:04:55.446092 1 controller.go:611] quota admission added evaluator for: endpoints
I0224 23:04:55.453443 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0224 23:05:26.282264 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-controller-manager [a5963d182fe311e5efe56ba91e12d79056d9b822c775fec5e5854d7316b9e36c] <==
* I0224 23:04:55.432221 1 shared_informer.go:262] Caches are synced for expand
I0224 23:04:55.435083 1 shared_informer.go:262] Caches are synced for GC
I0224 23:04:55.435111 1 shared_informer.go:262] Caches are synced for attach detach
I0224 23:04:55.437039 1 shared_informer.go:262] Caches are synced for TTL after finished
I0224 23:04:55.438817 1 shared_informer.go:262] Caches are synced for PV protection
I0224 23:04:55.439086 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0224 23:04:55.440280 1 shared_informer.go:262] Caches are synced for persistent volume
I0224 23:04:55.443186 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0224 23:04:55.444356 1 shared_informer.go:262] Caches are synced for ReplicationController
I0224 23:04:55.449576 1 shared_informer.go:262] Caches are synced for ephemeral
I0224 23:04:55.450788 1 shared_informer.go:262] Caches are synced for HPA
I0224 23:04:55.453687 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0224 23:04:55.455330 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0224 23:04:55.458645 1 shared_informer.go:262] Caches are synced for crt configmap
I0224 23:04:55.462917 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0224 23:04:55.517128 1 shared_informer.go:262] Caches are synced for deployment
I0224 23:04:55.575113 1 shared_informer.go:262] Caches are synced for disruption
I0224 23:04:55.575168 1 disruption.go:371] Sending events to api server.
I0224 23:04:55.577734 1 shared_informer.go:262] Caches are synced for stateful set
I0224 23:04:55.634027 1 shared_informer.go:262] Caches are synced for cronjob
I0224 23:04:55.650174 1 shared_informer.go:262] Caches are synced for resource quota
I0224 23:04:55.702534 1 shared_informer.go:262] Caches are synced for resource quota
I0224 23:04:56.121439 1 shared_informer.go:262] Caches are synced for garbage collector
I0224 23:04:56.150139 1 shared_informer.go:262] Caches are synced for garbage collector
I0224 23:04:56.150195 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [f11356fcbdbef1d1e6a1d18bd744524294c55345001f531ec36dc92f5f4a678d] <==
* I0224 23:05:26.221115 1 node.go:163] Successfully retrieved node IP: 192.168.39.247
I0224 23:05:26.221205 1 server_others.go:138] "Detected node IP" address="192.168.39.247"
I0224 23:05:26.221236 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0224 23:05:26.271796 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0224 23:05:26.271840 1 server_others.go:206] "Using iptables Proxier"
I0224 23:05:26.272572 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0224 23:05:26.273558 1 server.go:661] "Version info" version="v1.24.4"
I0224 23:05:26.273596 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0224 23:05:26.274840 1 config.go:317] "Starting service config controller"
I0224 23:05:26.274883 1 shared_informer.go:255] Waiting for caches to sync for service config
I0224 23:05:26.274903 1 config.go:226] "Starting endpoint slice config controller"
I0224 23:05:26.274907 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0224 23:05:26.278313 1 config.go:444] "Starting node config controller"
I0224 23:05:26.278360 1 shared_informer.go:255] Waiting for caches to sync for node config
I0224 23:05:26.375097 1 shared_informer.go:262] Caches are synced for service config
I0224 23:05:26.375521 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0224 23:05:26.378562 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [3c7d342b788c9fa4a23cf598a78b10d0da2ddc75318ee0690972d434cfc369df] <==
* I0224 23:05:08.817512 1 serving.go:348] Generated self-signed cert in-memory
I0224 23:05:09.035221 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
I0224 23:05:09.035242 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0224 23:05:09.039852 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0224 23:05:09.040377 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0224 23:05:09.040538 1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0224 23:05:09.040719 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0224 23:05:09.040794 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0224 23:05:09.040915 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0224 23:05:09.040965 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0224 23:05:09.043064 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0224 23:05:09.140929 1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0224 23:05:09.141299 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0224 23:05:09.141333 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Fri 2023-02-24 23:03:49 UTC, ends at Fri 2023-02-24 23:06:11 UTC. --
Feb 24 23:04:52 test-preload-034636 kubelet[822]: E0224 23:04:52.023879 822 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-6d4b75cb6d-tv2qr" podUID=6a602c52-853f-40b4-abf9-5bfe9edbc0c7
Feb 24 23:04:56 test-preload-034636 kubelet[822]: E0224 23:04:56.929578 822 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2658391219 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29: file exists" podSandboxID="5ba8dd9591a400f76736a9e214ef047b81a8046fb11f03fb4404525091158cb0"
Feb 24 23:04:56 test-preload-034636 kubelet[822]: E0224 23:04:56.930041 822 kuberuntime_manager.go:905] container &Container{Name:kube-scheduler,Image:k8s.gcr.io/kube-scheduler:v1.24.4,Command:[kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=false],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/scheduler.conf,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},} start failed in pod kube-scheduler-test-preload-034636_kube-system(0421fbbc1c8fa8d8e93fcf4d34dc87f8): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2658391219 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29: file exists
Feb 24 23:04:56 test-preload-034636 kubelet[822]: E0224 23:04:56.930153 822 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2658391219 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29: file exists\"" pod="kube-system/kube-scheduler-test-preload-034636" podUID=0421fbbc1c8fa8d8e93fcf4d34dc87f8
Feb 24 23:04:58 test-preload-034636 kubelet[822]: E0224 23:04:58.905397 822 kuberuntime_manager.go:905] container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9pnfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod storage-provisioner_kube-system(7619b280-6907-4244-a022-e385bf2c2712): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 24 23:04:58 test-preload-034636 kubelet[822]: E0224 23:04:58.906015 822 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID=7619b280-6907-4244-a022-e385bf2c2712
Feb 24 23:05:00 test-preload-034636 kubelet[822]: E0224 23:05:00.018206 822 kuberuntime_manager.go:905] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.24.4,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jpgfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-54nk7_kube-system(24bfee09-fc7f-432b-b322-64cb6a2442a0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 24 23:05:00 test-preload-034636 kubelet[822]: E0224 23:05:00.018250 822 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-54nk7" podUID=24bfee09-fc7f-432b-b322-64cb6a2442a0
Feb 24 23:05:00 test-preload-034636 kubelet[822]: E0224 23:05:00.042287 822 kuberuntime_manager.go:905] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.24.4,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jpgfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-54nk7_kube-system(24bfee09-fc7f-432b-b322-64cb6a2442a0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 24 23:05:00 test-preload-034636 kubelet[822]: E0224 23:05:00.042325 822 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-54nk7" podUID=24bfee09-fc7f-432b-b322-64cb6a2442a0
Feb 24 23:05:02 test-preload-034636 kubelet[822]: E0224 23:05:02.905900 822 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nn9wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-tv2qr_kube-system(6a602c52-853f-40b4-abf9-5bfe9edbc0c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 24 23:05:02 test-preload-034636 kubelet[822]: E0224 23:05:02.906317 822 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-6d4b75cb6d-tv2qr" podUID=6a602c52-853f-40b4-abf9-5bfe9edbc0c7
Feb 24 23:05:10 test-preload-034636 kubelet[822]: E0224 23:05:10.922700 822 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1806697971 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists" podSandboxID="144c414e8d5c00f3aca76a8cd27c9db682c5d45b907bd66c63d7c93a7346410c"
Feb 24 23:05:10 test-preload-034636 kubelet[822]: E0224 23:05:10.924070 822 kuberuntime_manager.go:905] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.24.4,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jpgfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-54nk7_kube-system(24bfee09-fc7f-432b-b322-64cb6a2442a0): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1806697971 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists
Feb 24 23:05:10 test-preload-034636 kubelet[822]: E0224 23:05:10.924198 822 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1806697971 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists\"" pod="kube-system/kube-proxy-54nk7" podUID=24bfee09-fc7f-432b-b322-64cb6a2442a0
Feb 24 23:05:16 test-preload-034636 kubelet[822]: E0224 23:05:16.926717 822 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1776606681 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists" podSandboxID="2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0"
Feb 24 23:05:16 test-preload-034636 kubelet[822]: E0224 23:05:16.927387 822 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nn9wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-tv2qr_kube-system(6a602c52-853f-40b4-abf9-5bfe9edbc0c7): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1776606681 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists
Feb 24 23:05:16 test-preload-034636 kubelet[822]: E0224 23:05:16.929087 822 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1776606681 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists\"" pod="kube-system/coredns-6d4b75cb6d-tv2qr" podUID=6a602c52-853f-40b4-abf9-5bfe9edbc0c7
Feb 24 23:05:29 test-preload-034636 kubelet[822]: E0224 23:05:29.922523 822 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-819311406 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists" podSandboxID="2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0"
Feb 24 23:05:29 test-preload-034636 kubelet[822]: E0224 23:05:29.923303 822 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nn9wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-tv2qr_kube-system(6a602c52-853f-40b4-abf9-5bfe9edbc0c7): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-819311406 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists
Feb 24 23:05:29 test-preload-034636 kubelet[822]: E0224 23:05:29.923530 822 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-819311406 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists\"" pod="kube-system/coredns-6d4b75cb6d-tv2qr" podUID=6a602c52-853f-40b4-abf9-5bfe9edbc0c7
Feb 24 23:05:42 test-preload-034636 kubelet[822]: I0224 23:05:42.161553 822 scope.go:110] "RemoveContainer" containerID="441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6"
Feb 24 23:05:43 test-preload-034636 kubelet[822]: E0224 23:05:43.949052 822 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2024837891 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists" podSandboxID="2b3639487061654c7998f64d47f45dda3d1d2c0a9e929fa1daf799e0021c16d0"
Feb 24 23:05:43 test-preload-034636 kubelet[822]: E0224 23:05:43.949217 822 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nn9wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-tv2qr_kube-system(6a602c52-853f-40b4-abf9-5bfe9edbc0c7): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2024837891 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists
Feb 24 23:05:43 test-preload-034636 kubelet[822]: E0224 23:05:43.949260 822 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2024837891 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists\"" pod="kube-system/coredns-6d4b75cb6d-tv2qr" podUID=6a602c52-853f-40b4-abf9-5bfe9edbc0c7
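The kubelet errors above fall into two families: CreateContainerConfigError ("services have not yet been read at least once, cannot construct envvars"), a transient condition while the kubelet's service informer is still syncing after the restart, and CreateContainerError, the same containerd snapshot rename collision seen in the containerd journal. Both clear on retry; a hedged sketch for confirming recovery, assuming the test's kubectl context (illustrative, not from the run):
  # surface any pods still failing to start in kube-system
  kubectl --context test-preload-034636 -n kube-system get events --field-selector type=Warning
  kubectl --context test-preload-034636 -n kube-system get pods -o wide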
*
* ==> storage-provisioner [441cc62483e6cb10be7470d61b5dacdf9677407f8be39a7518b28f562bbe6db6] <==
* I0224 23:05:12.070875 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0224 23:05:42.092775 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
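This first storage-provisioner instance timed out dialing the in-cluster apiserver VIP (10.96.0.1:443), plausibly because it started at 23:05:12 while kube-proxy only began restoring service rules around 23:05:26; the replacement container below initializes successfully. A hedged check of the VIP plumbing from inside the VM (illustrative command):
  # verify kube-proxy has installed NAT rules for the service VIP
  out/minikube-linux-amd64 ssh -p test-preload-034636 -- sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1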
*
* ==> storage-provisioner [6849df7b147741b2cd23147a4fb068980d49dc742be4daa8ef5d32c590a329ae] <==
* I0224 23:05:42.331433 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0224 23:05:42.360895 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0224 23:05:42.361887 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0224 23:05:59.813780 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0224 23:05:59.814373 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-034636_96ea9232-a36d-4041-81f5-86310cde0828!
I0224 23:05:59.815868 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"08bc8353-7794-4f71-9920-c8a3bbb54801", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-034636_96ea9232-a36d-4041-81f5-86310cde0828 became leader
I0224 23:05:59.915554 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-034636_96ea9232-a36d-4041-81f5-86310cde0828!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-034636 -n test-preload-034636
helpers_test.go:261: (dbg) Run: kubectl --context test-preload-034636 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-034636" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-034636
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-034636: (1.284872952s)
--- FAIL: TestPreload (387.28s)