=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-380460 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E0315 20:40:30.347850 11091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/ingress-addon-legacy-207828/client.crt: no such file or directory
E0315 20:40:46.237267 11091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/functional-023194/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-380460 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (1m21.533205144s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-380460 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-380460 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.151968554s)
preload_test.go:63: (dbg) Run: out/minikube-linux-amd64 stop -p test-preload-380460
E0315 20:41:29.065145 11091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/addons-338495/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-380460: (1m31.511345864s)
preload_test.go:71: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-380460 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd
E0315 20:42:43.190432 11091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/functional-023194/client.crt: no such file or directory
E0315 20:43:33.393026 11091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/ingress-addon-legacy-207828/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-380460 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd: (2m25.814423617s)
preload_test.go:80: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-380460 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got
-- stdout --
IMAGE                                       TAG                  IMAGE ID        SIZE
docker.io/kindest/kindnetd                  v20220726-ed811e41   d921cee849482   25.8MB
gcr.io/k8s-minikube/storage-provisioner     v5                   6e38f40d628db   9.06MB
k8s.gcr.io/coredns/coredns                  v1.8.6               a4ca41631cc7a   13.6MB
k8s.gcr.io/etcd                             3.5.3-0              aebe758cef4cd   102MB
k8s.gcr.io/kube-apiserver                   v1.24.4              6cab9d1bed1be   33.8MB
k8s.gcr.io/kube-controller-manager          v1.24.4              1f99cb6da9a82   31MB
k8s.gcr.io/kube-proxy                       v1.24.4              7a53d1e08ef58   39.5MB
k8s.gcr.io/kube-scheduler                   v1.24.4              03fa22539fc1c   15.5MB
k8s.gcr.io/pause                            3.7                  221177c6082a8   311kB
-- /stdout --
panic.go:522: *** TestPreload FAILED at 2023-03-15 20:45:08.920117398 +0000 UTC m=+3116.496448311
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-380460 -n test-preload-380460
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-380460 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-380460 logs -n 25: (1.089164331s)
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-100078 ssh -n | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:15 UTC | 15 Mar 23 20:15 UTC |
| | multinode-100078-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-100078 ssh -n multinode-100078 sudo cat | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:15 UTC | 15 Mar 23 20:15 UTC |
| | /home/docker/cp-test_multinode-100078-m03_multinode-100078.txt | | | | | |
| cp | multinode-100078 cp multinode-100078-m03:/home/docker/cp-test.txt | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:15 UTC | 15 Mar 23 20:15 UTC |
| | multinode-100078-m02:/home/docker/cp-test_multinode-100078-m03_multinode-100078-m02.txt | | | | | |
| ssh | multinode-100078 ssh -n | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:15 UTC | 15 Mar 23 20:15 UTC |
| | multinode-100078-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-100078 ssh -n multinode-100078-m02 sudo cat | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:15 UTC | 15 Mar 23 20:15 UTC |
| | /home/docker/cp-test_multinode-100078-m03_multinode-100078-m02.txt | | | | | |
| node | multinode-100078 node stop m03 | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:15 UTC | 15 Mar 23 20:15 UTC |
| node | multinode-100078 node start | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:15 UTC | 15 Mar 23 20:17 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-100078 | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:17 UTC | |
| stop | -p multinode-100078 | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:17 UTC | 15 Mar 23 20:20 UTC |
| start | -p multinode-100078 | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:20 UTC | 15 Mar 23 20:30 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-100078 | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:30 UTC | |
| node | multinode-100078 node delete | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:30 UTC | 15 Mar 23 20:30 UTC |
| | m03 | | | | | |
| stop | multinode-100078 stop | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:30 UTC | 15 Mar 23 20:33 UTC |
| start | -p multinode-100078 | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:33 UTC | 15 Mar 23 20:38 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-100078 | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:38 UTC | |
| start | -p multinode-100078-m02 | multinode-100078-m02 | jenkins | v1.29.0 | 15 Mar 23 20:38 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-100078-m03 | multinode-100078-m03 | jenkins | v1.29.0 | 15 Mar 23 20:38 UTC | 15 Mar 23 20:39 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-100078 | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:39 UTC | |
| delete | -p multinode-100078-m03 | multinode-100078-m03 | jenkins | v1.29.0 | 15 Mar 23 20:39 UTC | 15 Mar 23 20:39 UTC |
| delete | -p multinode-100078 | multinode-100078 | jenkins | v1.29.0 | 15 Mar 23 20:39 UTC | 15 Mar 23 20:39 UTC |
| start | -p test-preload-380460 | test-preload-380460 | jenkins | v1.29.0 | 15 Mar 23 20:39 UTC | 15 Mar 23 20:41 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-380460 | test-preload-380460 | jenkins | v1.29.0 | 15 Mar 23 20:41 UTC | 15 Mar 23 20:41 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| stop | -p test-preload-380460 | test-preload-380460 | jenkins | v1.29.0 | 15 Mar 23 20:41 UTC | 15 Mar 23 20:42 UTC |
| start | -p test-preload-380460 | test-preload-380460 | jenkins | v1.29.0 | 15 Mar 23 20:42 UTC | 15 Mar 23 20:45 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p test-preload-380460 -- sudo | test-preload-380460 | jenkins | v1.29.0 | 15 Mar 23 20:45 UTC | 15 Mar 23 20:45 UTC |
| | crictl image ls | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/15 20:42:42
Running on machine: ubuntu-20-agent-11
Binary: Built with gc go1.20.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0315 20:42:42.925791 26255 out.go:296] Setting OutFile to fd 1 ...
I0315 20:42:42.926166 26255 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0315 20:42:42.926209 26255 out.go:309] Setting ErrFile to fd 2...
I0315 20:42:42.926219 26255 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0315 20:42:42.926712 26255 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16056-4029/.minikube/bin
I0315 20:42:42.927321 26255 out.go:303] Setting JSON to false
I0315 20:42:42.928199 26255 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5108,"bootTime":1678907855,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0315 20:42:42.928261 26255 start.go:135] virtualization: kvm guest
I0315 20:42:42.932217 26255 out.go:177] * [test-preload-380460] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0315 20:42:42.934045 26255 out.go:177] - MINIKUBE_LOCATION=16056
I0315 20:42:42.934001 26255 notify.go:220] Checking for updates...
I0315 20:42:42.935864 26255 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0315 20:42:42.937736 26255 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/16056-4029/kubeconfig
I0315 20:42:42.939546 26255 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/16056-4029/.minikube
I0315 20:42:42.941354 26255 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0315 20:42:42.943006 26255 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0315 20:42:42.945223 26255 config.go:182] Loaded profile config "test-preload-380460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0315 20:42:42.945754 26255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0315 20:42:42.945824 26255 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 20:42:42.959494 26255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35997
I0315 20:42:42.959896 26255 main.go:141] libmachine: () Calling .GetVersion
I0315 20:42:42.960605 26255 main.go:141] libmachine: Using API Version 1
I0315 20:42:42.960632 26255 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 20:42:42.960985 26255 main.go:141] libmachine: () Calling .GetMachineName
I0315 20:42:42.961299 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:42:42.963568 26255 out.go:177] * Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
I0315 20:42:42.965415 26255 driver.go:365] Setting default libvirt URI to qemu:///system
I0315 20:42:42.965710 26255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0315 20:42:42.965748 26255 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 20:42:42.979315 26255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
I0315 20:42:42.979683 26255 main.go:141] libmachine: () Calling .GetVersion
I0315 20:42:42.980123 26255 main.go:141] libmachine: Using API Version 1
I0315 20:42:42.980151 26255 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 20:42:42.980427 26255 main.go:141] libmachine: () Calling .GetMachineName
I0315 20:42:42.980605 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:42:43.014244 26255 out.go:177] * Using the kvm2 driver based on existing profile
I0315 20:42:43.015890 26255 start.go:296] selected driver: kvm2
I0315 20:42:43.015918 26255 start.go:857] validating driver "kvm2" against &{Name:test-preload-380460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15973/minikube-v1.29.0-1678210391-15973-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-380460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mini
kube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 20:42:43.016016 26255 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0315 20:42:43.016697 26255 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0315 20:42:43.016790 26255 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16056-4029/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0315 20:42:43.030818 26255 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0315 20:42:43.031134 26255 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0315 20:42:43.031165 26255 cni.go:84] Creating CNI manager for ""
I0315 20:42:43.031177 26255 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0315 20:42:43.031189 26255 start_flags.go:319] config:
{Name:test-preload-380460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15973/minikube-v1.29.0-1678210391-15973-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-380460 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 20:42:43.031286 26255 iso.go:125] acquiring lock: {Name:mkb89eccb59a276c6f5a47d7079f1c8192cfa257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0315 20:42:43.034539 26255 out.go:177] * Starting control plane node test-preload-380460 in cluster test-preload-380460
I0315 20:42:43.036043 26255 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0315 20:42:43.500736 26255 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0315 20:42:43.500774 26255 cache.go:57] Caching tarball of preloaded images
I0315 20:42:43.500939 26255 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0315 20:42:43.503456 26255 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
I0315 20:42:43.505044 26255 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0315 20:42:43.617680 26255 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/16056-4029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0315 20:43:02.096260 26255 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0315 20:43:02.096359 26255 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16056-4029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0315 20:43:02.957898 26255 cache.go:60] Finished verifying existence of preloaded tar for v1.24.4 on containerd
I0315 20:43:02.958027 26255 profile.go:148] Saving config to /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/config.json ...
I0315 20:43:02.958218 26255 cache.go:193] Successfully downloaded all kic artifacts
I0315 20:43:02.958254 26255 start.go:364] acquiring machines lock for test-preload-380460: {Name:mk6e542d9c85a9ea234140b2f76687f93f490c84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0315 20:43:02.958310 26255 start.go:368] acquired machines lock for "test-preload-380460" in 39.805µs
I0315 20:43:02.958324 26255 start.go:96] Skipping create...Using existing machine configuration
I0315 20:43:02.958331 26255 fix.go:55] fixHost starting:
I0315 20:43:02.958642 26255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0315 20:43:02.958679 26255 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 20:43:02.972698 26255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39137
I0315 20:43:02.973101 26255 main.go:141] libmachine: () Calling .GetVersion
I0315 20:43:02.973539 26255 main.go:141] libmachine: Using API Version 1
I0315 20:43:02.973564 26255 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 20:43:02.973924 26255 main.go:141] libmachine: () Calling .GetMachineName
I0315 20:43:02.974115 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:43:02.974289 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetState
I0315 20:43:02.975836 26255 fix.go:103] recreateIfNeeded on test-preload-380460: state=Stopped err=<nil>
I0315 20:43:02.975859 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
W0315 20:43:02.976028 26255 fix.go:129] unexpected machine state, will restart: <nil>
I0315 20:43:02.979442 26255 out.go:177] * Restarting existing kvm2 VM for "test-preload-380460" ...
I0315 20:43:02.980889 26255 main.go:141] libmachine: (test-preload-380460) Calling .Start
I0315 20:43:02.981031 26255 main.go:141] libmachine: (test-preload-380460) Ensuring networks are active...
I0315 20:43:02.981799 26255 main.go:141] libmachine: (test-preload-380460) Ensuring network default is active
I0315 20:43:02.982120 26255 main.go:141] libmachine: (test-preload-380460) Ensuring network mk-test-preload-380460 is active
I0315 20:43:02.982437 26255 main.go:141] libmachine: (test-preload-380460) Getting domain xml...
I0315 20:43:02.983154 26255 main.go:141] libmachine: (test-preload-380460) Creating domain...
I0315 20:43:04.185728 26255 main.go:141] libmachine: (test-preload-380460) Waiting to get IP...
I0315 20:43:04.186562 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:04.186915 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:04.187014 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:04.186925 26300 retry.go:31] will retry after 261.145676ms: waiting for machine to come up
I0315 20:43:04.449272 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:04.449765 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:04.449798 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:04.449718 26300 retry.go:31] will retry after 363.773228ms: waiting for machine to come up
I0315 20:43:04.815215 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:04.815665 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:04.815701 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:04.815608 26300 retry.go:31] will retry after 296.611212ms: waiting for machine to come up
I0315 20:43:05.114114 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:05.114546 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:05.114574 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:05.114523 26300 retry.go:31] will retry after 595.344027ms: waiting for machine to come up
I0315 20:43:05.711280 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:05.711699 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:05.711727 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:05.711641 26300 retry.go:31] will retry after 622.860723ms: waiting for machine to come up
I0315 20:43:06.336695 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:06.337144 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:06.337173 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:06.337090 26300 retry.go:31] will retry after 846.229777ms: waiting for machine to come up
I0315 20:43:07.184996 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:07.185287 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:07.185312 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:07.185235 26300 retry.go:31] will retry after 718.211933ms: waiting for machine to come up
I0315 20:43:07.904643 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:07.905116 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:07.905148 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:07.905050 26300 retry.go:31] will retry after 1.480665271s: waiting for machine to come up
I0315 20:43:09.387712 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:09.388070 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:09.388092 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:09.388020 26300 retry.go:31] will retry after 1.547762099s: waiting for machine to come up
I0315 20:43:10.937744 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:10.938164 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:10.938216 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:10.938138 26300 retry.go:31] will retry after 2.19061498s: waiting for machine to come up
I0315 20:43:13.130276 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:13.130727 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:13.130757 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:13.130652 26300 retry.go:31] will retry after 2.420290093s: waiting for machine to come up
I0315 20:43:15.553405 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:15.553804 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:15.553838 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:15.553749 26300 retry.go:31] will retry after 3.094307489s: waiting for machine to come up
I0315 20:43:18.649976 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:18.650429 26255 main.go:141] libmachine: (test-preload-380460) DBG | unable to find current IP address of domain test-preload-380460 in network mk-test-preload-380460
I0315 20:43:18.650466 26255 main.go:141] libmachine: (test-preload-380460) DBG | I0315 20:43:18.650390 26300 retry.go:31] will retry after 3.24900164s: waiting for machine to come up
I0315 20:43:21.902740 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:21.903216 26255 main.go:141] libmachine: (test-preload-380460) Found IP for machine: 192.168.39.81
I0315 20:43:21.903242 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has current primary IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:21.903252 26255 main.go:141] libmachine: (test-preload-380460) Reserving static IP address...
I0315 20:43:21.903705 26255 main.go:141] libmachine: (test-preload-380460) Reserved static IP address: 192.168.39.81
I0315 20:43:21.903741 26255 main.go:141] libmachine: (test-preload-380460) Waiting for SSH to be available...
I0315 20:43:21.903766 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "test-preload-380460", mac: "52:54:00:c8:16:f4", ip: "192.168.39.81"} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:21.903791 26255 main.go:141] libmachine: (test-preload-380460) DBG | skip adding static IP to network mk-test-preload-380460 - found existing host DHCP lease matching {name: "test-preload-380460", mac: "52:54:00:c8:16:f4", ip: "192.168.39.81"}
I0315 20:43:21.903801 26255 main.go:141] libmachine: (test-preload-380460) DBG | Getting to WaitForSSH function...
I0315 20:43:21.905601 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:21.905872 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:21.905906 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:21.906020 26255 main.go:141] libmachine: (test-preload-380460) DBG | Using SSH client type: external
I0315 20:43:21.906047 26255 main.go:141] libmachine: (test-preload-380460) DBG | Using SSH private key: /home/jenkins/minikube-integration/16056-4029/.minikube/machines/test-preload-380460/id_rsa (-rw-------)
I0315 20:43:21.906078 26255 main.go:141] libmachine: (test-preload-380460) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16056-4029/.minikube/machines/test-preload-380460/id_rsa -p 22] /usr/bin/ssh <nil>}
I0315 20:43:21.906094 26255 main.go:141] libmachine: (test-preload-380460) DBG | About to run SSH command:
I0315 20:43:21.906110 26255 main.go:141] libmachine: (test-preload-380460) DBG | exit 0
I0315 20:43:21.999985 26255 main.go:141] libmachine: (test-preload-380460) DBG | SSH cmd err, output: <nil>:
I0315 20:43:22.000343 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetConfigRaw
I0315 20:43:22.001104 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetIP
I0315 20:43:22.003703 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.004066 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.004103 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.004339 26255 profile.go:148] Saving config to /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/config.json ...
I0315 20:43:22.004543 26255 machine.go:88] provisioning docker machine ...
I0315 20:43:22.004561 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:43:22.004739 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetMachineName
I0315 20:43:22.004934 26255 buildroot.go:166] provisioning hostname "test-preload-380460"
I0315 20:43:22.004956 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetMachineName
I0315 20:43:22.005154 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHHostname
I0315 20:43:22.007739 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.008019 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.008052 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.008196 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHPort
I0315 20:43:22.008380 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:43:22.008555 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:43:22.008693 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHUsername
I0315 20:43:22.008852 26255 main.go:141] libmachine: Using SSH client type: native
I0315 20:43:22.009498 26255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x176ec60] 0x1771e40 <nil> [] 0s} 192.168.39.81 22 <nil> <nil>}
I0315 20:43:22.009522 26255 main.go:141] libmachine: About to run SSH command:
sudo hostname test-preload-380460 && echo "test-preload-380460" | sudo tee /etc/hostname
I0315 20:43:22.153182 26255 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-380460
I0315 20:43:22.153221 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHHostname
I0315 20:43:22.155837 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.156166 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.156202 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.156337 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHPort
I0315 20:43:22.156579 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:43:22.156796 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:43:22.156951 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHUsername
I0315 20:43:22.157134 26255 main.go:141] libmachine: Using SSH client type: native
I0315 20:43:22.157536 26255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x176ec60] 0x1771e40 <nil> [] 0s} 192.168.39.81 22 <nil> <nil>}
I0315 20:43:22.157553 26255 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-380460' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-380460/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-380460' | sudo tee -a /etc/hosts;
fi
fi
I0315 20:43:22.296214 26255 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0315 20:43:22.296247 26255 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16056-4029/.minikube CaCertPath:/home/jenkins/minikube-integration/16056-4029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16056-4029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16056-4029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16056-4029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16056-4029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16056-4029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16056-4029/.minikube}
I0315 20:43:22.296270 26255 buildroot.go:174] setting up certificates
I0315 20:43:22.296283 26255 provision.go:83] configureAuth start
I0315 20:43:22.296315 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetMachineName
I0315 20:43:22.296598 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetIP
I0315 20:43:22.299210 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.299596 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.299628 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.299731 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHHostname
I0315 20:43:22.302032 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.302446 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.302481 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.302646 26255 provision.go:138] copyHostCerts
I0315 20:43:22.302714 26255 exec_runner.go:144] found /home/jenkins/minikube-integration/16056-4029/.minikube/key.pem, removing ...
I0315 20:43:22.302727 26255 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16056-4029/.minikube/key.pem
I0315 20:43:22.302807 26255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16056-4029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16056-4029/.minikube/key.pem (1675 bytes)
I0315 20:43:22.302909 26255 exec_runner.go:144] found /home/jenkins/minikube-integration/16056-4029/.minikube/ca.pem, removing ...
I0315 20:43:22.302919 26255 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16056-4029/.minikube/ca.pem
I0315 20:43:22.302959 26255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16056-4029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16056-4029/.minikube/ca.pem (1078 bytes)
I0315 20:43:22.303031 26255 exec_runner.go:144] found /home/jenkins/minikube-integration/16056-4029/.minikube/cert.pem, removing ...
I0315 20:43:22.303041 26255 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16056-4029/.minikube/cert.pem
I0315 20:43:22.303076 26255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16056-4029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16056-4029/.minikube/cert.pem (1123 bytes)
I0315 20:43:22.303138 26255 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16056-4029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16056-4029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16056-4029/.minikube/certs/ca-key.pem org=jenkins.test-preload-380460 san=[192.168.39.81 192.168.39.81 localhost 127.0.0.1 minikube test-preload-380460]
I0315 20:43:22.397554 26255 provision.go:172] copyRemoteCerts
I0315 20:43:22.397631 26255 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0315 20:43:22.397660 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHHostname
I0315 20:43:22.399974 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.400271 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.400318 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.400511 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHPort
I0315 20:43:22.400775 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:43:22.400997 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHUsername
I0315 20:43:22.401200 26255 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16056-4029/.minikube/machines/test-preload-380460/id_rsa Username:docker}
I0315 20:43:22.497993 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0315 20:43:22.519638 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0315 20:43:22.541066 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0315 20:43:22.563425 26255 provision.go:86] duration metric: configureAuth took 267.117686ms
I0315 20:43:22.563450 26255 buildroot.go:189] setting minikube options for container-runtime
I0315 20:43:22.563642 26255 config.go:182] Loaded profile config "test-preload-380460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0315 20:43:22.563659 26255 machine.go:91] provisioned docker machine in 559.104013ms
I0315 20:43:22.563668 26255 start.go:300] post-start starting for "test-preload-380460" (driver="kvm2")
I0315 20:43:22.563677 26255 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0315 20:43:22.563708 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:43:22.564010 26255 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0315 20:43:22.564037 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHHostname
I0315 20:43:22.566381 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.566698 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.566734 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.566850 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHPort
I0315 20:43:22.567027 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:43:22.567192 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHUsername
I0315 20:43:22.567364 26255 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16056-4029/.minikube/machines/test-preload-380460/id_rsa Username:docker}
I0315 20:43:22.662322 26255 ssh_runner.go:195] Run: cat /etc/os-release
I0315 20:43:22.666471 26255 info.go:137] Remote host: Buildroot 2021.02.12
I0315 20:43:22.666494 26255 filesync.go:126] Scanning /home/jenkins/minikube-integration/16056-4029/.minikube/addons for local assets ...
I0315 20:43:22.666559 26255 filesync.go:126] Scanning /home/jenkins/minikube-integration/16056-4029/.minikube/files for local assets ...
I0315 20:43:22.666627 26255 filesync.go:149] local asset: /home/jenkins/minikube-integration/16056-4029/.minikube/files/etc/ssl/certs/110912.pem -> 110912.pem in /etc/ssl/certs
I0315 20:43:22.666709 26255 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0315 20:43:22.675670 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/files/etc/ssl/certs/110912.pem --> /etc/ssl/certs/110912.pem (1708 bytes)
I0315 20:43:22.697892 26255 start.go:303] post-start completed in 134.208947ms
I0315 20:43:22.697915 26255 fix.go:57] fixHost completed within 19.739583804s
I0315 20:43:22.697941 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHHostname
I0315 20:43:22.700559 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.700890 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.700928 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.701137 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHPort
I0315 20:43:22.701331 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:43:22.701502 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:43:22.701639 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHUsername
I0315 20:43:22.701788 26255 main.go:141] libmachine: Using SSH client type: native
I0315 20:43:22.702168 26255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x176ec60] 0x1771e40 <nil> [] 0s} 192.168.39.81 22 <nil> <nil>}
I0315 20:43:22.702179 26255 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0315 20:43:22.833045 26255 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678913002.781029022
I0315 20:43:22.833071 26255 fix.go:207] guest clock: 1678913002.781029022
I0315 20:43:22.833081 26255 fix.go:220] Guest: 2023-03-15 20:43:22.781029022 +0000 UTC Remote: 2023-03-15 20:43:22.697920227 +0000 UTC m=+39.810990111 (delta=83.108795ms)
I0315 20:43:22.833117 26255 fix.go:191] guest clock delta is within tolerance: 83.108795ms
I0315 20:43:22.833125 26255 start.go:83] releasing machines lock for "test-preload-380460", held for 19.874804086s
I0315 20:43:22.833151 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:43:22.833422 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetIP
I0315 20:43:22.836000 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.836323 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.836352 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.836511 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:43:22.836979 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:43:22.837158 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:43:22.837248 26255 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0315 20:43:22.837285 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHHostname
I0315 20:43:22.837376 26255 ssh_runner.go:195] Run: cat /version.json
I0315 20:43:22.837403 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHHostname
I0315 20:43:22.839970 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.840010 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.840429 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.840462 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.840507 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:22.840549 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:22.840657 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHPort
I0315 20:43:22.840793 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHPort
I0315 20:43:22.840858 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:43:22.840950 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:43:22.841029 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHUsername
I0315 20:43:22.841087 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHUsername
I0315 20:43:22.841331 26255 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16056-4029/.minikube/machines/test-preload-380460/id_rsa Username:docker}
I0315 20:43:22.841359 26255 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16056-4029/.minikube/machines/test-preload-380460/id_rsa Username:docker}
I0315 20:43:22.944413 26255 ssh_runner.go:195] Run: systemctl --version
I0315 20:43:22.950137 26255 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0315 20:43:22.955846 26255 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0315 20:43:22.955908 26255 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0315 20:43:22.973638 26255 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0315 20:43:22.973660 26255 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0315 20:43:22.973738 26255 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 20:43:27.002421 26255 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.028662387s)
I0315 20:43:27.002560 26255 containerd.go:604] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
I0315 20:43:27.002627 26255 ssh_runner.go:195] Run: which lz4
I0315 20:43:27.006688 26255 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0315 20:43:27.011005 26255 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0315 20:43:27.011031 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
I0315 20:43:28.666183 26255 containerd.go:551] Took 1.659534 seconds to copy over tarball
I0315 20:43:28.666257 26255 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0315 20:43:31.758528 26255 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.092245059s)
I0315 20:43:31.758554 26255 containerd.go:558] Took 3.092343 seconds to extract the tarball
I0315 20:43:31.758565 26255 ssh_runner.go:146] rm: /preloaded.tar.lz4
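The lines above show the preload flow: check for /preloaded.tar.lz4 on the guest, copy the cached tarball over when it is missing, extract it into /var with lz4, then remove it. Below is a minimal Go sketch of that same check-copy-extract sequence; it runs commands locally with os/exec rather than over SSH as minikube's ssh_runner does, and the cache filename is only an illustrative placeholder.

    // Sketch only: mirrors the stat -> copy -> tar -I lz4 -> rm sequence above.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func run(args ...string) error {
        cmd := exec.Command(args[0], args[1:]...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        const target = "/preloaded.tar.lz4"
        cached := "preloaded-images.tar.lz4" // illustrative local cache path, not minikube's
        if _, err := os.Stat(target); err != nil {
            // Missing on the host, as in the "No such file or directory" check above.
            if err := run("sudo", "cp", cached, target); err != nil {
                log.Fatalf("copy tarball: %v", err)
            }
        }
        if err := run("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", target); err != nil {
            log.Fatalf("extract tarball: %v", err)
        }
        _ = run("sudo", "rm", "-f", target) // cleanup, like the final rm step above
    }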
I0315 20:43:31.798375 26255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 20:43:31.896235 26255 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0315 20:43:31.914758 26255 start.go:485] detecting cgroup driver to use...
I0315 20:43:31.914838 26255 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0315 20:43:34.610185 26255 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.695322424s)
I0315 20:43:34.610255 26255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0315 20:43:34.622998 26255 docker.go:186] disabling cri-docker service (if available) ...
I0315 20:43:34.623053 26255 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0315 20:43:34.635443 26255 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0315 20:43:34.647554 26255 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0315 20:43:34.746509 26255 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0315 20:43:34.860372 26255 docker.go:202] disabling docker service ...
I0315 20:43:34.860443 26255 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0315 20:43:34.874080 26255 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0315 20:43:34.886001 26255 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0315 20:43:34.998991 26255 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0315 20:43:35.105448 26255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0315 20:43:35.118991 26255 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0315 20:43:35.138102 26255 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
I0315 20:43:35.147450 26255 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0315 20:43:35.156710 26255 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0315 20:43:35.156771 26255 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0315 20:43:35.165961 26255 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0315 20:43:35.175111 26255 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0315 20:43:35.183926 26255 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0315 20:43:35.192890 26255 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0315 20:43:35.202460 26255 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
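The sed commands above rewrite /etc/containerd/config.toml in place: pin the sandbox (pause) image, disable restrict_oom_score_adj, force SystemdCgroup = false for the cgroupfs driver, switch to the runc v2 runtime, and point conf_dir at /etc/cni/net.d. A rough Go sketch of the same kind of regexp-based rewrite, operating on a local file rather than via SSH, is below; the path and values simply mirror what the log shows.

    // Sketch: regexp rewrites of a containerd config.toml, as the sed calls above do.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        s := string(data)
        // cgroupfs driver: force SystemdCgroup = false (multiline mode, keep indentation).
        s = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).ReplaceAllString(s, "${1}SystemdCgroup = false")
        // Pin the pause image used for pod sandboxes.
        s = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).ReplaceAllString(s, `${1}sandbox_image = "k8s.gcr.io/pause:3.7"`)
        // CNI configuration directory.
        s = regexp.MustCompile(`(?m)^(\s*)conf_dir = .*$`).ReplaceAllString(s, `${1}conf_dir = "/etc/cni/net.d"`)
        if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
            log.Fatal(err)
        }
    }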
I0315 20:43:35.211838 26255 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0315 20:43:35.220157 26255 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0315 20:43:35.220264 26255 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0315 20:43:35.232876 26255 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0315 20:43:35.242702 26255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 20:43:35.349357 26255 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0315 20:43:35.376668 26255 start.go:532] Will wait 60s for socket path /run/containerd/containerd.sock
I0315 20:43:35.376744 26255 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0315 20:43:35.382247 26255 retry.go:31] will retry after 833.265438ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0315 20:43:36.216311 26255 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
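Between 20:43:35.38 and 20:43:36.22 the run waits for /run/containerd/containerd.sock to appear after the restart, retrying the stat after a back-off. A small Go sketch of that kind of bounded wait follows; it uses a fixed polling interval instead of minikube's computed retry delay.

    // Sketch: wait up to a deadline for a socket path to exist, as the retry above does.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForPath(path string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitForPath("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }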
I0315 20:43:36.221978 26255 start.go:553] Will wait 60s for crictl version
I0315 20:43:36.222026 26255 ssh_runner.go:195] Run: which crictl
I0315 20:43:36.225736 26255 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0315 20:43:36.254290 26255 start.go:569] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.19
RuntimeApiVersion: v1alpha2
I0315 20:43:36.254352 26255 ssh_runner.go:195] Run: containerd --version
I0315 20:43:36.283825 26255 ssh_runner.go:195] Run: containerd --version
I0315 20:43:36.314655 26255 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.19 ...
I0315 20:43:36.316716 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetIP
I0315 20:43:36.319759 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:36.320083 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:43:36.320105 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:43:36.320366 26255 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0315 20:43:36.324334 26255 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0315 20:43:36.335962 26255 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0315 20:43:36.336046 26255 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 20:43:36.369064 26255 containerd.go:608] all images are preloaded for containerd runtime.
I0315 20:43:36.369095 26255 containerd.go:522] Images already preloaded, skipping extraction
I0315 20:43:36.369152 26255 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 20:43:36.399185 26255 containerd.go:608] all images are preloaded for containerd runtime.
I0315 20:43:36.399206 26255 cache_images.go:84] Images are preloaded, skipping loading
I0315 20:43:36.399261 26255 ssh_runner.go:195] Run: sudo crictl info
I0315 20:43:36.433751 26255 cni.go:84] Creating CNI manager for ""
I0315 20:43:36.433780 26255 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0315 20:43:36.433793 26255 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0315 20:43:36.433814 26255 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-380460 NodeName:test-preload-380460 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0315 20:43:36.433950 26255 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.81
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-380460"
kubeletExtraArgs:
node-ip: 192.168.39.81
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.4
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0315 20:43:36.434047 26255 kubeadm.go:968] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-380460 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81
[Install]
config:
{KubernetesVersion:v1.24.4 ClusterName:test-preload-380460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0315 20:43:36.434108 26255 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
I0315 20:43:36.443780 26255 binaries.go:44] Found k8s binaries, skipping transfer
I0315 20:43:36.443849 26255 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0315 20:43:36.453210 26255 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (483 bytes)
I0315 20:43:36.468873 26255 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0315 20:43:36.484962 26255 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
I0315 20:43:36.501548 26255 ssh_runner.go:195] Run: grep 192.168.39.81 control-plane.minikube.internal$ /etc/hosts
I0315 20:43:36.505189 26255 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.81 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
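The bash one-liner above updates /etc/hosts without editing it in place: it strips any existing control-plane.minikube.internal entry, appends the new mapping, writes the result to a temp file, and sudo-copies it back. A Go sketch of the same filter-append-replace pattern is below; it writes the temp file locally and copies it with sudo cp, as the log does.

    // Sketch: replace a hosts entry the way the grep/echo/cp one-liner above does.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const ip = "192.168.39.81"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, fmt.Sprintf("%s\t%s", ip, host))
        tmp, err := os.CreateTemp("", "hosts")
        if err != nil {
            log.Fatal(err)
        }
        defer os.Remove(tmp.Name())
        if _, err := tmp.WriteString(strings.Join(keep, "\n") + "\n"); err != nil {
            log.Fatal(err)
        }
        tmp.Close()
        // Copy back into place with elevated privileges, mirroring the sudo cp above.
        if out, err := exec.Command("sudo", "cp", tmp.Name(), "/etc/hosts").CombinedOutput(); err != nil {
            log.Fatalf("cp: %v\n%s", err, out)
        }
    }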
I0315 20:43:36.516596 26255 certs.go:56] Setting up /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460 for IP: 192.168.39.81
I0315 20:43:36.516633 26255 certs.go:186] acquiring lock for shared ca certs: {Name:mk8b1e892b6a9364935af34d441f97f6aa4de48b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 20:43:36.516790 26255 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16056-4029/.minikube/ca.key
I0315 20:43:36.516827 26255 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16056-4029/.minikube/proxy-client-ca.key
I0315 20:43:36.516889 26255 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/client.key
I0315 20:43:36.516952 26255 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/apiserver.key.42d444b6
I0315 20:43:36.516986 26255 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/proxy-client.key
I0315 20:43:36.517086 26255 certs.go:401] found cert: /home/jenkins/minikube-integration/16056-4029/.minikube/certs/home/jenkins/minikube-integration/16056-4029/.minikube/certs/11091.pem (1338 bytes)
W0315 20:43:36.517120 26255 certs.go:397] ignoring /home/jenkins/minikube-integration/16056-4029/.minikube/certs/home/jenkins/minikube-integration/16056-4029/.minikube/certs/11091_empty.pem, impossibly tiny 0 bytes
I0315 20:43:36.517133 26255 certs.go:401] found cert: /home/jenkins/minikube-integration/16056-4029/.minikube/certs/home/jenkins/minikube-integration/16056-4029/.minikube/certs/ca-key.pem (1679 bytes)
I0315 20:43:36.517164 26255 certs.go:401] found cert: /home/jenkins/minikube-integration/16056-4029/.minikube/certs/home/jenkins/minikube-integration/16056-4029/.minikube/certs/ca.pem (1078 bytes)
I0315 20:43:36.517186 26255 certs.go:401] found cert: /home/jenkins/minikube-integration/16056-4029/.minikube/certs/home/jenkins/minikube-integration/16056-4029/.minikube/certs/cert.pem (1123 bytes)
I0315 20:43:36.517217 26255 certs.go:401] found cert: /home/jenkins/minikube-integration/16056-4029/.minikube/certs/home/jenkins/minikube-integration/16056-4029/.minikube/certs/key.pem (1675 bytes)
I0315 20:43:36.517262 26255 certs.go:401] found cert: /home/jenkins/minikube-integration/16056-4029/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16056-4029/.minikube/files/etc/ssl/certs/110912.pem (1708 bytes)
I0315 20:43:36.517923 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0315 20:43:36.541488 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0315 20:43:36.564648 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0315 20:43:36.587667 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0315 20:43:36.610987 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0315 20:43:36.634654 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0315 20:43:36.659001 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0315 20:43:36.682271 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0315 20:43:36.705063 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/files/etc/ssl/certs/110912.pem --> /usr/share/ca-certificates/110912.pem (1708 bytes)
I0315 20:43:36.728003 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0315 20:43:36.751082 26255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16056-4029/.minikube/certs/11091.pem --> /usr/share/ca-certificates/11091.pem (1338 bytes)
I0315 20:43:36.773956 26255 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0315 20:43:36.789900 26255 ssh_runner.go:195] Run: openssl version
I0315 20:43:36.795570 26255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110912.pem && ln -fs /usr/share/ca-certificates/110912.pem /etc/ssl/certs/110912.pem"
I0315 20:43:36.806210 26255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110912.pem
I0315 20:43:36.810962 26255 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 15 20:00 /usr/share/ca-certificates/110912.pem
I0315 20:43:36.811022 26255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110912.pem
I0315 20:43:36.816618 26255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110912.pem /etc/ssl/certs/3ec20f2e.0"
I0315 20:43:36.826276 26255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0315 20:43:36.835671 26255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0315 20:43:36.840103 26255 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 15 19:54 /usr/share/ca-certificates/minikubeCA.pem
I0315 20:43:36.840152 26255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0315 20:43:36.845518 26255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0315 20:43:36.854701 26255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11091.pem && ln -fs /usr/share/ca-certificates/11091.pem /etc/ssl/certs/11091.pem"
I0315 20:43:36.863873 26255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11091.pem
I0315 20:43:36.868894 26255 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 15 20:00 /usr/share/ca-certificates/11091.pem
I0315 20:43:36.868953 26255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11091.pem
I0315 20:43:36.874739 26255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11091.pem /etc/ssl/certs/51391683.0"
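Each CA certificate above is installed the same way: copy it under /usr/share/ca-certificates, take its OpenSSL subject hash (openssl x509 -hash -noout -in ...), and point /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients can find it. A Go sketch of that hash-and-symlink step follows; it shells out to openssl and creates the symlink locally instead of running sudo ln -fs over SSH.

    // Sketch: create the /etc/ssl/certs/<subject-hash>.0 symlink for a PEM certificate.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatalf("openssl: %v", err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the link name above
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // force-replace, like ln -fs
        if err := os.Symlink(cert, link); err != nil {
            log.Fatalf("symlink: %v", err)
        }
        log.Printf("%s -> %s", link, cert)
    }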
I0315 20:43:36.884859 26255 kubeadm.go:401] StartCluster: {Name:test-preload-380460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15973/minikube-v1.29.0-1678210391-15973-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.24.4 ClusterName:test-preload-380460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 20:43:36.884953 26255 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0315 20:43:36.885022 26255 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0315 20:43:36.916480 26255 cri.go:87] found id: ""
I0315 20:43:36.916561 26255 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0315 20:43:36.926331 26255 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0315 20:43:36.926351 26255 kubeadm.go:633] restartCluster start
I0315 20:43:36.926399 26255 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0315 20:43:36.935150 26255 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0315 20:43:36.935557 26255 kubeconfig.go:135] verify returned: extract IP: "test-preload-380460" does not appear in /home/jenkins/minikube-integration/16056-4029/kubeconfig
I0315 20:43:36.935652 26255 kubeconfig.go:146] "test-preload-380460" context is missing from /home/jenkins/minikube-integration/16056-4029/kubeconfig - will repair!
I0315 20:43:36.935866 26255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16056-4029/kubeconfig: {Name:mk00eb5b0ed86b7fe1dc3b258ff4a24f5f66bd05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 20:43:36.936523 26255 kapi.go:59] client config for test-preload-380460: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/client.crt", KeyFile:"/home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/client.key", CAFile:"/home/jenkins/minikube-integration/16056-4029/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29d6de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0315 20:43:36.937291 26255 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0315 20:43:36.945896 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:36.945934 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:36.957303 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:37.458045 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:37.458127 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:37.469994 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:37.958398 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:37.958463 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:37.970291 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:38.457686 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:38.457773 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:38.468972 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:38.957525 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:38.957596 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:38.969122 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:39.457691 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:39.457788 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:39.469193 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:39.957707 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:39.957790 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:39.969336 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:40.457506 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:40.457586 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:40.469122 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:40.957656 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:40.957736 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:40.969232 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:41.457793 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:41.457864 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:41.469998 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:41.957547 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:41.957632 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:41.969297 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:42.457838 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:42.457969 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:42.469646 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:42.957441 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:42.957506 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:42.969056 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:43.457731 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:43.457811 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:43.470259 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:43.958023 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:43.958121 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:43.971012 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:44.457632 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:44.457721 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:44.469241 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:44.957773 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:44.957845 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:44.970465 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:45.458051 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:45.458118 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:45.471052 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:45.957571 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:45.957656 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:45.969283 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:46.457889 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:46.457963 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:46.469782 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:46.957448 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:46.957510 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:46.969089 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:46.969109 26255 api_server.go:165] Checking apiserver status ...
I0315 20:43:46.969145 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 20:43:46.979420 26255 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 20:43:46.979443 26255 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
I0315 20:43:46.979450 26255 kubeadm.go:1120] stopping kube-system containers ...
I0315 20:43:46.979460 26255 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0315 20:43:46.979494 26255 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0315 20:43:47.010684 26255 cri.go:87] found id: ""
I0315 20:43:47.010753 26255 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0315 20:43:47.026610 26255 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0315 20:43:47.035896 26255 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0315 20:43:47.035949 26255 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0315 20:43:47.045455 26255 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0315 20:43:47.045473 26255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0315 20:43:47.146176 26255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0315 20:43:47.841067 26255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0315 20:43:48.156000 26255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0315 20:43:48.257779 26255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0315 20:43:48.321774 26255 api_server.go:51] waiting for apiserver process to appear ...
I0315 20:43:48.321838 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:48.839836 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:49.340213 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:49.840074 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:50.340121 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:50.840069 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:51.339658 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:51.839835 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:52.339189 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:52.840210 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:53.340011 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:53.839443 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:54.339970 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:54.840217 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:55.339662 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:55.840195 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:56.339965 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:56.839429 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:57.339780 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:57.839596 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:58.339382 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:58.839933 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:59.339360 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:43:59.839356 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:44:00.339893 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:44:00.839290 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:44:01.339640 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:44:01.840212 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:44:01.852164 26255 api_server.go:71] duration metric: took 13.530393972s to wait for apiserver process to appear ...
I0315 20:44:01.852205 26255 api_server.go:87] waiting for apiserver healthz status ...
I0315 20:44:01.852219 26255 api_server.go:252] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
I0315 20:44:06.855350 26255 api_server.go:268] stopped: https://192.168.39.81:8443/healthz: Get "https://192.168.39.81:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 20:44:07.356381 26255 api_server.go:252] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
I0315 20:44:08.805365 26255 api_server.go:278] https://192.168.39.81:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0315 20:44:08.805398 26255 api_server.go:102] status: https://192.168.39.81:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0315 20:44:08.855555 26255 api_server.go:252] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
I0315 20:44:08.886444 26255 api_server.go:278] https://192.168.39.81:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 20:44:08.886480 26255 api_server.go:102] status: https://192.168.39.81:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 20:44:09.356043 26255 api_server.go:252] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
I0315 20:44:09.362905 26255 api_server.go:278] https://192.168.39.81:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 20:44:09.362936 26255 api_server.go:102] status: https://192.168.39.81:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 20:44:09.855558 26255 api_server.go:252] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
I0315 20:44:09.862561 26255 api_server.go:278] https://192.168.39.81:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 20:44:09.862582 26255 api_server.go:102] status: https://192.168.39.81:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 20:44:10.356348 26255 api_server.go:252] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
I0315 20:44:10.364201 26255 api_server.go:278] https://192.168.39.81:8443/healthz returned 200:
ok
I0315 20:44:10.373544 26255 api_server.go:140] control plane version: v1.24.4
I0315 20:44:10.373570 26255 api_server.go:130] duration metric: took 8.521355049s to wait for apiserver health ...
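From 20:44:01 to 20:44:10 the run polls https://192.168.39.81:8443/healthz until it returns 200, tolerating the intermediate 403 (anonymous user before RBAC bootstrap finishes) and 500 (post-start hooks still failing) responses shown above. A simplified Go sketch of that polling loop follows; for brevity it skips TLS verification and sends anonymous requests, whereas the client config in the log uses the cluster CA plus the profile's client certificate and key.

    // Sketch: poll an apiserver /healthz endpoint until it reports 200 OK.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Simplification: the logged client uses ca.crt and a client cert/key instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.39.81:8443/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", code) // e.g. 403 or 500 as above
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for healthz")
        os.Exit(1)
    }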
I0315 20:44:10.373579 26255 cni.go:84] Creating CNI manager for ""
I0315 20:44:10.373586 26255 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0315 20:44:10.376084 26255 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0315 20:44:10.378021 26255 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0315 20:44:10.388844 26255 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
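The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; its exact contents are not shown in this log. As a rough illustration only, the Go sketch below writes a generic bridge + host-local conflist of the same general shape (the 10.244.0.0/16 subnet follows the pod CIDR logged earlier, but the file is not byte-for-byte what minikube installs).

    // Sketch: write a generic bridge CNI conflist (illustrative, not minikube's exact file).
    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    func main() {
        conf := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        data, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        // Needs root to write under /etc/cni/net.d on a real host.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
            log.Fatal(err)
        }
    }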
I0315 20:44:10.413248 26255 system_pods.go:43] waiting for kube-system pods to appear ...
I0315 20:44:10.421391 26255 system_pods.go:59] 7 kube-system pods found
I0315 20:44:10.421423 26255 system_pods.go:61] "coredns-6d4b75cb6d-drm2z" [9628590f-f582-47c4-a991-245405b5b610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0315 20:44:10.421432 26255 system_pods.go:61] "etcd-test-preload-380460" [7042ec67-2d5c-4ffd-b161-00e1788b9251] Running
I0315 20:44:10.421441 26255 system_pods.go:61] "kube-apiserver-test-preload-380460" [e20f9aa4-12fc-4ddd-bfd5-500f660d406b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0315 20:44:10.421448 26255 system_pods.go:61] "kube-controller-manager-test-preload-380460" [379cdcc3-36f6-4394-b3f2-8d205567d35c] Running
I0315 20:44:10.421455 26255 system_pods.go:61] "kube-proxy-6xvbn" [71ce35e9-9989-44ac-a354-77945d47533f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0315 20:44:10.421463 26255 system_pods.go:61] "kube-scheduler-test-preload-380460" [442a9383-ebcb-4dd4-a9f1-eb49b7a7267a] Running
I0315 20:44:10.421477 26255 system_pods.go:61] "storage-provisioner" [4657c086-4b92-47a0-9752-8fe4a418546e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0315 20:44:10.421486 26255 system_pods.go:74] duration metric: took 8.217215ms to wait for pod list to return data ...
I0315 20:44:10.421496 26255 node_conditions.go:102] verifying NodePressure condition ...
I0315 20:44:10.425882 26255 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0315 20:44:10.425911 26255 node_conditions.go:123] node cpu capacity is 2
I0315 20:44:10.425924 26255 node_conditions.go:105] duration metric: took 4.422945ms to run NodePressure ...
I0315 20:44:10.425943 26255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0315 20:44:10.663067 26255 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0315 20:44:10.667593 26255 kubeadm.go:784] kubelet initialised
I0315 20:44:10.667622 26255 kubeadm.go:785] duration metric: took 4.531575ms waiting for restarted kubelet to initialise ...
I0315 20:44:10.667631 26255 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 20:44:10.675060 26255 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace to be "Ready" ...
I0315 20:44:10.685597 26255 pod_ready.go:97] node "test-preload-380460" hosting pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:10.685623 26255 pod_ready.go:81] duration metric: took 10.534073ms waiting for pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace to be "Ready" ...
E0315 20:44:10.685633 26255 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-380460" hosting pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:10.685642 26255 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:44:10.693403 26255 pod_ready.go:97] node "test-preload-380460" hosting pod "etcd-test-preload-380460" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:10.693423 26255 pod_ready.go:81] duration metric: took 7.772404ms waiting for pod "etcd-test-preload-380460" in "kube-system" namespace to be "Ready" ...
E0315 20:44:10.693429 26255 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-380460" hosting pod "etcd-test-preload-380460" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:10.693436 26255 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:44:10.699294 26255 pod_ready.go:97] node "test-preload-380460" hosting pod "kube-apiserver-test-preload-380460" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:10.699315 26255 pod_ready.go:81] duration metric: took 5.871802ms waiting for pod "kube-apiserver-test-preload-380460" in "kube-system" namespace to be "Ready" ...
E0315 20:44:10.699323 26255 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-380460" hosting pod "kube-apiserver-test-preload-380460" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:10.699332 26255 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:44:10.817066 26255 pod_ready.go:97] node "test-preload-380460" hosting pod "kube-controller-manager-test-preload-380460" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:10.817090 26255 pod_ready.go:81] duration metric: took 117.746499ms waiting for pod "kube-controller-manager-test-preload-380460" in "kube-system" namespace to be "Ready" ...
E0315 20:44:10.817099 26255 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-380460" hosting pod "kube-controller-manager-test-preload-380460" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:10.817107 26255 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6xvbn" in "kube-system" namespace to be "Ready" ...
I0315 20:44:11.217683 26255 pod_ready.go:97] node "test-preload-380460" hosting pod "kube-proxy-6xvbn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:11.217705 26255 pod_ready.go:81] duration metric: took 400.591457ms waiting for pod "kube-proxy-6xvbn" in "kube-system" namespace to be "Ready" ...
E0315 20:44:11.217713 26255 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-380460" hosting pod "kube-proxy-6xvbn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:11.217722 26255 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:44:11.616587 26255 pod_ready.go:97] node "test-preload-380460" hosting pod "kube-scheduler-test-preload-380460" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:11.616614 26255 pod_ready.go:81] duration metric: took 398.885794ms waiting for pod "kube-scheduler-test-preload-380460" in "kube-system" namespace to be "Ready" ...
E0315 20:44:11.616622 26255 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-380460" hosting pod "kube-scheduler-test-preload-380460" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-380460" has status "Ready":"False"
I0315 20:44:11.616630 26255 pod_ready.go:38] duration metric: took 948.986588ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 20:44:11.616644 26255 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0315 20:44:11.627066 26255 ops.go:34] apiserver oom_adj: -16
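The last check before restartCluster completes reads the apiserver's OOM score adjustment (cat /proc/$(pgrep kube-apiserver)/oom_adj, giving -16 above), confirming the control plane process is shielded from the OOM killer. A small Go sketch of the same read follows, resolving the PID with pgrep and reading the proc file directly.

    // Sketch: read the kube-apiserver oom_adj value, like the cat/pgrep pipeline above.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            log.Fatalf("pgrep: %v", err)
        }
        pid := strings.TrimSpace(string(out))
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatalf("read oom_adj: %v", err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data)))
    }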
I0315 20:44:11.627087 26255 kubeadm.go:637] restartCluster took 34.700729771s
I0315 20:44:11.627095 26255 kubeadm.go:403] StartCluster complete in 34.742253612s
I0315 20:44:11.627120 26255 settings.go:142] acquiring lock: {Name:mkda40e792103d4faba2af295c7b07f0338baa6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 20:44:11.627211 26255 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/16056-4029/kubeconfig
I0315 20:44:11.627860 26255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16056-4029/kubeconfig: {Name:mk00eb5b0ed86b7fe1dc3b258ff4a24f5f66bd05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 20:44:11.628090 26255 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0315 20:44:11.628207 26255 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0315 20:44:11.628287 26255 config.go:182] Loaded profile config "test-preload-380460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0315 20:44:11.628334 26255 addons.go:66] Setting storage-provisioner=true in profile "test-preload-380460"
I0315 20:44:11.628345 26255 addons.go:66] Setting default-storageclass=true in profile "test-preload-380460"
I0315 20:44:11.628366 26255 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-380460"
I0315 20:44:11.628367 26255 addons.go:228] Setting addon storage-provisioner=true in "test-preload-380460"
W0315 20:44:11.628492 26255 addons.go:237] addon storage-provisioner should already be in state true
I0315 20:44:11.628554 26255 host.go:66] Checking if "test-preload-380460" exists ...
I0315 20:44:11.628568 26255 kapi.go:59] client config for test-preload-380460: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/client.crt", KeyFile:"/home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/client.key", CAFile:"/home/jenkins/minikube-integration/16056-4029/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29d6de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0315 20:44:11.628841 26255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0315 20:44:11.628887 26255 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 20:44:11.628923 26255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0315 20:44:11.628958 26255 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 20:44:11.631329 26255 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-380460" context rescaled to 1 replicas
I0315 20:44:11.631364 26255 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0315 20:44:11.634403 26255 out.go:177] * Verifying Kubernetes components...
I0315 20:44:11.635903 26255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 20:44:11.643496 26255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
I0315 20:44:11.644012 26255 main.go:141] libmachine: () Calling .GetVersion
I0315 20:44:11.644671 26255 main.go:141] libmachine: Using API Version 1
I0315 20:44:11.644697 26255 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 20:44:11.645159 26255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
I0315 20:44:11.645241 26255 main.go:141] libmachine: () Calling .GetMachineName
I0315 20:44:11.645476 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetState
I0315 20:44:11.645616 26255 main.go:141] libmachine: () Calling .GetVersion
I0315 20:44:11.646146 26255 main.go:141] libmachine: Using API Version 1
I0315 20:44:11.646169 26255 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 20:44:11.646488 26255 main.go:141] libmachine: () Calling .GetMachineName
I0315 20:44:11.647167 26255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0315 20:44:11.647214 26255 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 20:44:11.647922 26255 kapi.go:59] client config for test-preload-380460: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/client.crt", KeyFile:"/home/jenkins/minikube-integration/16056-4029/.minikube/profiles/test-preload-380460/client.key", CAFile:"/home/jenkins/minikube-integration/16056-4029/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29d6de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0315 20:44:11.658230 26255 addons.go:228] Setting addon default-storageclass=true in "test-preload-380460"
W0315 20:44:11.658246 26255 addons.go:237] addon default-storageclass should already be in state true
I0315 20:44:11.658266 26255 host.go:66] Checking if "test-preload-380460" exists ...
I0315 20:44:11.658531 26255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0315 20:44:11.658572 26255 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 20:44:11.664690 26255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
I0315 20:44:11.665083 26255 main.go:141] libmachine: () Calling .GetVersion
I0315 20:44:11.665534 26255 main.go:141] libmachine: Using API Version 1
I0315 20:44:11.665559 26255 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 20:44:11.665846 26255 main.go:141] libmachine: () Calling .GetMachineName
I0315 20:44:11.666062 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetState
I0315 20:44:11.667579 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:44:11.670048 26255 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0315 20:44:11.671678 26255 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0315 20:44:11.671699 26255 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0315 20:44:11.671719 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHHostname
I0315 20:44:11.673560 26255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
I0315 20:44:11.673970 26255 main.go:141] libmachine: () Calling .GetVersion
I0315 20:44:11.674469 26255 main.go:141] libmachine: Using API Version 1
I0315 20:44:11.674492 26255 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 20:44:11.674734 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:44:11.674851 26255 main.go:141] libmachine: () Calling .GetMachineName
I0315 20:44:11.675167 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:44:11.675225 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:44:11.675364 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHPort
I0315 20:44:11.675405 26255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0315 20:44:11.675453 26255 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 20:44:11.675571 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:44:11.675746 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHUsername
I0315 20:44:11.675918 26255 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16056-4029/.minikube/machines/test-preload-380460/id_rsa Username:docker}
I0315 20:44:11.689845 26255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
I0315 20:44:11.690224 26255 main.go:141] libmachine: () Calling .GetVersion
I0315 20:44:11.690791 26255 main.go:141] libmachine: Using API Version 1
I0315 20:44:11.690819 26255 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 20:44:11.691130 26255 main.go:141] libmachine: () Calling .GetMachineName
I0315 20:44:11.691341 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetState
I0315 20:44:11.692886 26255 main.go:141] libmachine: (test-preload-380460) Calling .DriverName
I0315 20:44:11.693136 26255 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
I0315 20:44:11.693151 26255 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0315 20:44:11.693170 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHHostname
I0315 20:44:11.695838 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:44:11.696227 26255 main.go:141] libmachine: (test-preload-380460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:16:f4", ip: ""} in network mk-test-preload-380460: {Iface:virbr1 ExpiryTime:2023-03-15 21:43:14 +0000 UTC Type:0 Mac:52:54:00:c8:16:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-380460 Clientid:01:52:54:00:c8:16:f4}
I0315 20:44:11.696257 26255 main.go:141] libmachine: (test-preload-380460) DBG | domain test-preload-380460 has defined IP address 192.168.39.81 and MAC address 52:54:00:c8:16:f4 in network mk-test-preload-380460
I0315 20:44:11.696424 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHPort
I0315 20:44:11.696643 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHKeyPath
I0315 20:44:11.696809 26255 main.go:141] libmachine: (test-preload-380460) Calling .GetSSHUsername
I0315 20:44:11.696962 26255 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16056-4029/.minikube/machines/test-preload-380460/id_rsa Username:docker}
I0315 20:44:11.824232 26255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0315 20:44:11.846205 26255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0315 20:44:11.910249 26255 node_ready.go:35] waiting up to 6m0s for node "test-preload-380460" to be "Ready" ...
I0315 20:44:11.910478 26255 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0315 20:44:12.690833 26255 main.go:141] libmachine: Making call to close driver server
I0315 20:44:12.690863 26255 main.go:141] libmachine: (test-preload-380460) Calling .Close
I0315 20:44:12.691101 26255 main.go:141] libmachine: Successfully made call to close driver server
I0315 20:44:12.691117 26255 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 20:44:12.691128 26255 main.go:141] libmachine: Making call to close driver server
I0315 20:44:12.691138 26255 main.go:141] libmachine: (test-preload-380460) Calling .Close
I0315 20:44:12.691158 26255 main.go:141] libmachine: (test-preload-380460) DBG | Closing plugin on server side
I0315 20:44:12.691358 26255 main.go:141] libmachine: Successfully made call to close driver server
I0315 20:44:12.691381 26255 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 20:44:12.691390 26255 main.go:141] libmachine: (test-preload-380460) DBG | Closing plugin on server side
I0315 20:44:12.691395 26255 main.go:141] libmachine: Making call to close driver server
I0315 20:44:12.691490 26255 main.go:141] libmachine: (test-preload-380460) Calling .Close
I0315 20:44:12.691707 26255 main.go:141] libmachine: Successfully made call to close driver server
I0315 20:44:12.691720 26255 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 20:44:12.692089 26255 main.go:141] libmachine: Making call to close driver server
I0315 20:44:12.692107 26255 main.go:141] libmachine: (test-preload-380460) Calling .Close
I0315 20:44:12.692343 26255 main.go:141] libmachine: Successfully made call to close driver server
I0315 20:44:12.692359 26255 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 20:44:12.692377 26255 main.go:141] libmachine: Making call to close driver server
I0315 20:44:12.692386 26255 main.go:141] libmachine: (test-preload-380460) Calling .Close
I0315 20:44:12.692600 26255 main.go:141] libmachine: Successfully made call to close driver server
I0315 20:44:12.692611 26255 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 20:44:12.694803 26255 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0315 20:44:12.696445 26255 addons.go:499] enable addons completed in 1.068245265s: enabled=[default-storageclass storage-provisioner]
I0315 20:44:13.917213 26255 node_ready.go:58] node "test-preload-380460" has status "Ready":"False"
I0315 20:44:16.417224 26255 node_ready.go:58] node "test-preload-380460" has status "Ready":"False"
I0315 20:44:18.419829 26255 node_ready.go:58] node "test-preload-380460" has status "Ready":"False"
I0315 20:44:19.416518 26255 node_ready.go:49] node "test-preload-380460" has status "Ready":"True"
I0315 20:44:19.416546 26255 node_ready.go:38] duration metric: took 7.506259439s waiting for node "test-preload-380460" to be "Ready" ...
I0315 20:44:19.416554 26255 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 20:44:19.421381 26255 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace to be "Ready" ...
I0315 20:44:21.435397 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:23.931951 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:25.936807 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:28.433956 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:30.933477 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:32.935676 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:35.432224 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:37.433506 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:39.933506 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:42.432580 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:44.933211 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:47.434096 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:49.932385 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:52.432981 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:54.932107 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:56.933786 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:44:59.433903 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:45:01.932524 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:45:04.432448 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:45:06.433040 26255 pod_ready.go:102] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"False"
I0315 20:45:07.433215 26255 pod_ready.go:92] pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace has status "Ready":"True"
I0315 20:45:07.433247 26255 pod_ready.go:81] duration metric: took 48.011842127s waiting for pod "coredns-6d4b75cb6d-drm2z" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.433260 26255 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.437586 26255 pod_ready.go:92] pod "etcd-test-preload-380460" in "kube-system" namespace has status "Ready":"True"
I0315 20:45:07.437602 26255 pod_ready.go:81] duration metric: took 4.334655ms waiting for pod "etcd-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.437610 26255 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.441923 26255 pod_ready.go:92] pod "kube-apiserver-test-preload-380460" in "kube-system" namespace has status "Ready":"True"
I0315 20:45:07.441940 26255 pod_ready.go:81] duration metric: took 4.323551ms waiting for pod "kube-apiserver-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.441952 26255 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.446152 26255 pod_ready.go:92] pod "kube-controller-manager-test-preload-380460" in "kube-system" namespace has status "Ready":"True"
I0315 20:45:07.446165 26255 pod_ready.go:81] duration metric: took 4.206212ms waiting for pod "kube-controller-manager-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.446172 26255 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6xvbn" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.450452 26255 pod_ready.go:92] pod "kube-proxy-6xvbn" in "kube-system" namespace has status "Ready":"True"
I0315 20:45:07.450466 26255 pod_ready.go:81] duration metric: took 4.289371ms waiting for pod "kube-proxy-6xvbn" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.450473 26255 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.830289 26255 pod_ready.go:92] pod "kube-scheduler-test-preload-380460" in "kube-system" namespace has status "Ready":"True"
I0315 20:45:07.830316 26255 pod_ready.go:81] duration metric: took 379.835617ms waiting for pod "kube-scheduler-test-preload-380460" in "kube-system" namespace to be "Ready" ...
I0315 20:45:07.830332 26255 pod_ready.go:38] duration metric: took 48.413767915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 20:45:07.830354 26255 api_server.go:51] waiting for apiserver process to appear ...
I0315 20:45:07.830404 26255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 20:45:07.845223 26255 api_server.go:71] duration metric: took 56.213831575s to wait for apiserver process to appear ...
I0315 20:45:07.845245 26255 api_server.go:87] waiting for apiserver healthz status ...
I0315 20:45:07.845262 26255 api_server.go:252] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
I0315 20:45:07.850647 26255 api_server.go:278] https://192.168.39.81:8443/healthz returned 200:
ok
I0315 20:45:07.851424 26255 api_server.go:140] control plane version: v1.24.4
I0315 20:45:07.851450 26255 api_server.go:130] duration metric: took 6.1966ms to wait for apiserver health ...
I0315 20:45:07.851460 26255 system_pods.go:43] waiting for kube-system pods to appear ...
I0315 20:45:08.034365 26255 system_pods.go:59] 7 kube-system pods found
I0315 20:45:08.034398 26255 system_pods.go:61] "coredns-6d4b75cb6d-drm2z" [9628590f-f582-47c4-a991-245405b5b610] Running
I0315 20:45:08.034406 26255 system_pods.go:61] "etcd-test-preload-380460" [7042ec67-2d5c-4ffd-b161-00e1788b9251] Running
I0315 20:45:08.034413 26255 system_pods.go:61] "kube-apiserver-test-preload-380460" [e20f9aa4-12fc-4ddd-bfd5-500f660d406b] Running
I0315 20:45:08.034420 26255 system_pods.go:61] "kube-controller-manager-test-preload-380460" [379cdcc3-36f6-4394-b3f2-8d205567d35c] Running
I0315 20:45:08.034425 26255 system_pods.go:61] "kube-proxy-6xvbn" [71ce35e9-9989-44ac-a354-77945d47533f] Running
I0315 20:45:08.034432 26255 system_pods.go:61] "kube-scheduler-test-preload-380460" [442a9383-ebcb-4dd4-a9f1-eb49b7a7267a] Running
I0315 20:45:08.034441 26255 system_pods.go:61] "storage-provisioner" [4657c086-4b92-47a0-9752-8fe4a418546e] Running
I0315 20:45:08.034455 26255 system_pods.go:74] duration metric: took 182.988774ms to wait for pod list to return data ...
I0315 20:45:08.034464 26255 default_sa.go:34] waiting for default service account to be created ...
I0315 20:45:08.229993 26255 default_sa.go:45] found service account: "default"
I0315 20:45:08.230014 26255 default_sa.go:55] duration metric: took 195.544ms for default service account to be created ...
I0315 20:45:08.230021 26255 system_pods.go:116] waiting for k8s-apps to be running ...
I0315 20:45:08.433187 26255 system_pods.go:86] 7 kube-system pods found
I0315 20:45:08.433215 26255 system_pods.go:89] "coredns-6d4b75cb6d-drm2z" [9628590f-f582-47c4-a991-245405b5b610] Running
I0315 20:45:08.433220 26255 system_pods.go:89] "etcd-test-preload-380460" [7042ec67-2d5c-4ffd-b161-00e1788b9251] Running
I0315 20:45:08.433225 26255 system_pods.go:89] "kube-apiserver-test-preload-380460" [e20f9aa4-12fc-4ddd-bfd5-500f660d406b] Running
I0315 20:45:08.433229 26255 system_pods.go:89] "kube-controller-manager-test-preload-380460" [379cdcc3-36f6-4394-b3f2-8d205567d35c] Running
I0315 20:45:08.433233 26255 system_pods.go:89] "kube-proxy-6xvbn" [71ce35e9-9989-44ac-a354-77945d47533f] Running
I0315 20:45:08.433238 26255 system_pods.go:89] "kube-scheduler-test-preload-380460" [442a9383-ebcb-4dd4-a9f1-eb49b7a7267a] Running
I0315 20:45:08.433243 26255 system_pods.go:89] "storage-provisioner" [4657c086-4b92-47a0-9752-8fe4a418546e] Running
I0315 20:45:08.433251 26255 system_pods.go:126] duration metric: took 203.224804ms to wait for k8s-apps to be running ...
I0315 20:45:08.433262 26255 system_svc.go:44] waiting for kubelet service to be running ....
I0315 20:45:08.433313 26255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 20:45:08.448720 26255 system_svc.go:56] duration metric: took 15.436674ms WaitForService to wait for kubelet.
I0315 20:45:08.448746 26255 kubeadm.go:578] duration metric: took 56.817358002s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0315 20:45:08.448764 26255 node_conditions.go:102] verifying NodePressure condition ...
I0315 20:45:08.630427 26255 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0315 20:45:08.630449 26255 node_conditions.go:123] node cpu capacity is 2
I0315 20:45:08.630461 26255 node_conditions.go:105] duration metric: took 181.692625ms to run NodePressure ...
I0315 20:45:08.630471 26255 start.go:228] waiting for startup goroutines ...
I0315 20:45:08.630477 26255 start.go:233] waiting for cluster config update ...
I0315 20:45:08.630492 26255 start.go:242] writing updated cluster config ...
I0315 20:45:08.630784 26255 ssh_runner.go:195] Run: rm -f paused
I0315 20:45:08.680463 26255 start.go:555] kubectl: 1.26.2, cluster: 1.24.4 (minor skew: 2)
I0315 20:45:08.682838 26255 out.go:177]
W0315 20:45:08.684625 26255 out.go:239] ! /usr/local/bin/kubectl is version 1.26.2, which may have incompatibilities with Kubernetes 1.24.4.
I0315 20:45:08.686419 26255 out.go:177] - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
I0315 20:45:08.688187 26255 out.go:177] * Done! kubectl is now configured to use "test-preload-380460" cluster and "default" namespace by default
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
980cc5472a043 7a53d1e08ef58 22 seconds ago Running kube-proxy 1 e563fc899dd7a
31fa481435c65 6e38f40d628db 28 seconds ago Running storage-provisioner 2 4ec870898897b
78e4bf48050fd a4ca41631cc7a 36 seconds ago Running coredns 1 0684c7388f597
2450b464c2c28 1f99cb6da9a82 47 seconds ago Running kube-controller-manager 1 a57d8a0ff1e23
64103aeb40eeb 6e38f40d628db 58 seconds ago Exited storage-provisioner 1 4ec870898897b
b09a9af32012f aebe758cef4cd About a minute ago Running etcd 1 fda7cba6c6f4b
8eb3c62fceaff 6cab9d1bed1be About a minute ago Running kube-apiserver 1 c8247117a94b0
1a9969c553620 03fa22539fc1c About a minute ago Running kube-scheduler 1 3968d41265596
*
* ==> containerd <==
* -- Journal begins at Wed 2023-03-15 20:43:14 UTC, ends at Wed 2023-03-15 20:45:09 UTC. --
Mar 15 20:44:32 test-preload-380460 containerd[629]: time="2023-03-15T20:44:32.876142921Z" level=error msg="CreateContainer within sandbox \"0684c7388f597059295d32e99e78a217751915a2ce1f40c160d96f6b02c916c7\" for &ContainerMetadata{Name:coredns,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-351386384 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists"
Mar 15 20:44:33 test-preload-380460 containerd[629]: time="2023-03-15T20:44:33.511627230Z" level=info msg="CreateContainer within sandbox \"0684c7388f597059295d32e99e78a217751915a2ce1f40c160d96f6b02c916c7\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Mar 15 20:44:33 test-preload-380460 containerd[629]: time="2023-03-15T20:44:33.549391277Z" level=info msg="CreateContainer within sandbox \"0684c7388f597059295d32e99e78a217751915a2ce1f40c160d96f6b02c916c7\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"78e4bf48050fd814b61cc44d1a7bd3c119be3a7d53adab03a0ca6d1126eefa66\""
Mar 15 20:44:33 test-preload-380460 containerd[629]: time="2023-03-15T20:44:33.551462635Z" level=info msg="StartContainer for \"78e4bf48050fd814b61cc44d1a7bd3c119be3a7d53adab03a0ca6d1126eefa66\""
Mar 15 20:44:33 test-preload-380460 containerd[629]: time="2023-03-15T20:44:33.627521950Z" level=info msg="StartContainer for \"78e4bf48050fd814b61cc44d1a7bd3c119be3a7d53adab03a0ca6d1126eefa66\" returns successfully"
Mar 15 20:44:35 test-preload-380460 containerd[629]: time="2023-03-15T20:44:35.386498428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xvbn,Uid:71ce35e9-9989-44ac-a354-77945d47533f,Namespace:kube-system,Attempt:0,}"
Mar 15 20:44:35 test-preload-380460 containerd[629]: time="2023-03-15T20:44:35.399253342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xvbn,Uid:71ce35e9-9989-44ac-a354-77945d47533f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1770097048 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists"
Mar 15 20:44:40 test-preload-380460 containerd[629]: time="2023-03-15T20:44:40.965359410Z" level=info msg="shim disconnected" id=64103aeb40eebdd5a4937d0ecf6b99be52be94564348337b6dc491e84742aed4
Mar 15 20:44:40 test-preload-380460 containerd[629]: time="2023-03-15T20:44:40.965813824Z" level=warning msg="cleaning up after shim disconnected" id=64103aeb40eebdd5a4937d0ecf6b99be52be94564348337b6dc491e84742aed4 namespace=k8s.io
Mar 15 20:44:40 test-preload-380460 containerd[629]: time="2023-03-15T20:44:40.965833145Z" level=info msg="cleaning up dead shim"
Mar 15 20:44:40 test-preload-380460 containerd[629]: time="2023-03-15T20:44:40.980227031Z" level=warning msg="cleanup warnings time=\"2023-03-15T20:44:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1496 runtime=io.containerd.runc.v2\n"
Mar 15 20:44:41 test-preload-380460 containerd[629]: time="2023-03-15T20:44:41.535670850Z" level=info msg="CreateContainer within sandbox \"4ec870898897b5565b7b4960f9f748acbc234a3206582268295cc042962643eb\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
Mar 15 20:44:41 test-preload-380460 containerd[629]: time="2023-03-15T20:44:41.565686954Z" level=info msg="CreateContainer within sandbox \"4ec870898897b5565b7b4960f9f748acbc234a3206582268295cc042962643eb\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"31fa481435c65d371095154d143b84fc1df425537a7a5cab9501d64431c91561\""
Mar 15 20:44:41 test-preload-380460 containerd[629]: time="2023-03-15T20:44:41.566866315Z" level=info msg="StartContainer for \"31fa481435c65d371095154d143b84fc1df425537a7a5cab9501d64431c91561\""
Mar 15 20:44:41 test-preload-380460 containerd[629]: time="2023-03-15T20:44:41.652455774Z" level=info msg="StartContainer for \"31fa481435c65d371095154d143b84fc1df425537a7a5cab9501d64431c91561\" returns successfully"
Mar 15 20:44:47 test-preload-380460 containerd[629]: time="2023-03-15T20:44:47.387480738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xvbn,Uid:71ce35e9-9989-44ac-a354-77945d47533f,Namespace:kube-system,Attempt:0,}"
Mar 15 20:44:47 test-preload-380460 containerd[629]: time="2023-03-15T20:44:47.422549271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 15 20:44:47 test-preload-380460 containerd[629]: time="2023-03-15T20:44:47.422890618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 15 20:44:47 test-preload-380460 containerd[629]: time="2023-03-15T20:44:47.423070665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 15 20:44:47 test-preload-380460 containerd[629]: time="2023-03-15T20:44:47.423522957Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e563fc899dd7a2f19cef0f0659b7b46ed6038250b25e62e195475e51550214e6 pid=1551 runtime=io.containerd.runc.v2
Mar 15 20:44:47 test-preload-380460 containerd[629]: time="2023-03-15T20:44:47.491712499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xvbn,Uid:71ce35e9-9989-44ac-a354-77945d47533f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e563fc899dd7a2f19cef0f0659b7b46ed6038250b25e62e195475e51550214e6\""
Mar 15 20:44:47 test-preload-380460 containerd[629]: time="2023-03-15T20:44:47.501376990Z" level=info msg="CreateContainer within sandbox \"e563fc899dd7a2f19cef0f0659b7b46ed6038250b25e62e195475e51550214e6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
Mar 15 20:44:47 test-preload-380460 containerd[629]: time="2023-03-15T20:44:47.533336631Z" level=info msg="CreateContainer within sandbox \"e563fc899dd7a2f19cef0f0659b7b46ed6038250b25e62e195475e51550214e6\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"980cc5472a04389c86b4d551443a119ea7981be5f90efa32c7aa61850ab28c5c\""
Mar 15 20:44:47 test-preload-380460 containerd[629]: time="2023-03-15T20:44:47.537129619Z" level=info msg="StartContainer for \"980cc5472a04389c86b4d551443a119ea7981be5f90efa32c7aa61850ab28c5c\""
Mar 15 20:44:47 test-preload-380460 containerd[629]: time="2023-03-15T20:44:47.616540304Z" level=info msg="StartContainer for \"980cc5472a04389c86b4d551443a119ea7981be5f90efa32c7aa61850ab28c5c\" returns successfully"
*
* ==> coredns [78e4bf48050fd814b61cc44d1a7bd3c119be3a7d53adab03a0ca6d1126eefa66] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] 127.0.0.1:58101 - 64439 "HINFO IN 4601504383526202625.6028887042925451303. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015042115s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
*
* ==> describe nodes <==
* Name: test-preload-380460
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=test-preload-380460
kubernetes.io/os=linux
minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e
minikube.k8s.io/name=test-preload-380460
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_15T20_40_52_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 15 Mar 2023 20:40:48 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: test-preload-380460
AcquireTime: <unset>
RenewTime: Wed, 15 Mar 2023 20:45:09 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 15 Mar 2023 20:44:19 +0000 Wed, 15 Mar 2023 20:40:44 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 15 Mar 2023 20:44:19 +0000 Wed, 15 Mar 2023 20:40:44 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 15 Mar 2023 20:44:19 +0000 Wed, 15 Mar 2023 20:40:44 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 15 Mar 2023 20:44:19 +0000 Wed, 15 Mar 2023 20:44:19 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.81
Hostname: test-preload-380460
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 45733bfcb3764e4b923e935a935564ec
System UUID: 45733bfc-b376-4e4b-923e-935a935564ec
Boot ID: 9093e4dd-4dc9-4004-8be2-91591cc8749e
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.19
Kubelet Version: v1.24.4
Kube-Proxy Version: v1.24.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-drm2z 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 4m5s
kube-system etcd-test-preload-380460 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 4m17s
kube-system kube-apiserver-test-preload-380460 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m17s
kube-system kube-controller-manager-test-preload-380460 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m17s
kube-system kube-proxy-6xvbn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m5s
kube-system kube-scheduler-test-preload-380460 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m17s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m4s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 22s kube-proxy
Normal Starting 4m4s kube-proxy
Normal Starting 4m17s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m17s kubelet Node test-preload-380460 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m17s kubelet Node test-preload-380460 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m17s kubelet Node test-preload-380460 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m17s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 4m7s kubelet Node test-preload-380460 status is now: NodeReady
Normal RegisteredNode 4m5s node-controller Node test-preload-380460 event: Registered Node test-preload-380460 in Controller
Normal Starting 81s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 81s (x8 over 81s) kubelet Node test-preload-380460 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 81s (x8 over 81s) kubelet Node test-preload-380460 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 81s (x7 over 81s) kubelet Node test-preload-380460 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 81s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 35s node-controller Node test-preload-380460 event: Registered Node test-preload-380460 in Controller
*
* ==> dmesg <==
* [Mar15 20:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.070399] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.954031] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.263103] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.145878] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.399872] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
[ +15.579624] systemd-fstab-generator[528]: Ignoring "noauto" for root device
[ +2.847619] systemd-fstab-generator[558]: Ignoring "noauto" for root device
[ +0.109932] systemd-fstab-generator[569]: Ignoring "noauto" for root device
[ +0.137714] systemd-fstab-generator[582]: Ignoring "noauto" for root device
[ +0.109419] systemd-fstab-generator[593]: Ignoring "noauto" for root device
[ +0.240588] systemd-fstab-generator[620]: Ignoring "noauto" for root device
[ +12.803668] systemd-fstab-generator[814]: Ignoring "noauto" for root device
[Mar15 20:44] kauditd_printk_skb: 7 callbacks suppressed
[Mar15 20:45] kauditd_printk_skb: 8 callbacks suppressed
*
* ==> etcd [b09a9af32012f663d9529ebb091a60279bb26944ba62cdecc442dbab61b5052a] <==
* {"level":"info","ts":"2023-03-15T20:44:04.856Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"81f5d9acb096f107","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-03-15T20:44:04.857Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-03-15T20:44:04.857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 switched to configuration voters=(9364630335907098887)"}
{"level":"info","ts":"2023-03-15T20:44:04.858Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","added-peer-id":"81f5d9acb096f107","added-peer-peer-urls":["https://192.168.39.81:2380"]}
{"level":"info","ts":"2023-03-15T20:44:04.858Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-15T20:44:04.858Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-15T20:44:04.859Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-15T20:44:04.860Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"81f5d9acb096f107","initial-advertise-peer-urls":["https://192.168.39.81:2380"],"listen-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.81:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-15T20:44:04.860Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-15T20:44:04.860Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.81:2380"}
{"level":"info","ts":"2023-03-15T20:44:04.860Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.81:2380"}
{"level":"info","ts":"2023-03-15T20:44:06.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 is starting a new election at term 2"}
{"level":"info","ts":"2023-03-15T20:44:06.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became pre-candidate at term 2"}
{"level":"info","ts":"2023-03-15T20:44:06.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgPreVoteResp from 81f5d9acb096f107 at term 2"}
{"level":"info","ts":"2023-03-15T20:44:06.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became candidate at term 3"}
{"level":"info","ts":"2023-03-15T20:44:06.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgVoteResp from 81f5d9acb096f107 at term 3"}
{"level":"info","ts":"2023-03-15T20:44:06.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became leader at term 3"}
{"level":"info","ts":"2023-03-15T20:44:06.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81f5d9acb096f107 elected leader 81f5d9acb096f107 at term 3"}
{"level":"info","ts":"2023-03-15T20:44:06.338Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"81f5d9acb096f107","local-member-attributes":"{Name:test-preload-380460 ClientURLs:[https://192.168.39.81:2379]}","request-path":"/0/members/81f5d9acb096f107/attributes","cluster-id":"a77bf2d9a9fbb59e","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-15T20:44:06.338Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-15T20:44:06.339Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-15T20:44:06.340Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.81:2379"}
{"level":"info","ts":"2023-03-15T20:44:06.340Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-15T20:44:06.340Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-15T20:44:06.341Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> kernel <==
* 20:45:09 up 2 min, 0 users, load average: 1.41, 0.55, 0.20
Linux test-preload-380460 5.10.57 #1 SMP Tue Mar 7 21:42:52 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [8eb3c62fceaff25734115a11e393a21fe9bfee87ac2c31a96e82ac728cc6c506] <==
* I0315 20:44:08.780384 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0315 20:44:08.738942 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0315 20:44:08.780940 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0315 20:44:08.738952 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0315 20:44:08.758140 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0315 20:44:08.781245 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0315 20:44:08.846402 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0315 20:44:08.847185 1 apf_controller.go:322] Running API Priority and Fairness config worker
E0315 20:44:08.847693 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0315 20:44:08.873148 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0315 20:44:08.883012 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0315 20:44:08.883144 1 cache.go:39] Caches are synced for autoregister controller
I0315 20:44:08.883403 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0315 20:44:08.924164 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0315 20:44:08.934669 1 shared_informer.go:262] Caches are synced for node_authorizer
I0315 20:44:09.404576 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0315 20:44:09.728005 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0315 20:44:10.534196 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0315 20:44:10.552147 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0315 20:44:10.609993 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0315 20:44:10.637866 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0315 20:44:10.644422 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0315 20:44:34.666988 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0315 20:44:34.699418 1 controller.go:611] quota admission added evaluator for: endpoints
I0315 20:44:47.813557 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-controller-manager [2450b464c2c28e1f1b9cc64cc91040d5188a099a7b74d09e62d75d0e985df0b6] <==
* W0315 20:44:34.641165 1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-380460. Assuming now as a timestamp.
I0315 20:44:34.641437 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0315 20:44:34.641636 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0315 20:44:34.643555 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0315 20:44:34.644150 1 shared_informer.go:262] Caches are synced for daemon sets
I0315 20:44:34.648937 1 shared_informer.go:262] Caches are synced for job
I0315 20:44:34.652456 1 shared_informer.go:262] Caches are synced for PVC protection
I0315 20:44:34.652721 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0315 20:44:34.655256 1 shared_informer.go:262] Caches are synced for deployment
I0315 20:44:34.656354 1 shared_informer.go:262] Caches are synced for crt configmap
I0315 20:44:34.656527 1 shared_informer.go:262] Caches are synced for endpoint
I0315 20:44:34.659255 1 shared_informer.go:262] Caches are synced for disruption
I0315 20:44:34.659290 1 disruption.go:371] Sending events to api server.
I0315 20:44:34.663886 1 shared_informer.go:262] Caches are synced for stateful set
I0315 20:44:34.713421 1 event.go:294] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
I0315 20:44:34.784611 1 shared_informer.go:262] Caches are synced for resource quota
I0315 20:44:34.840479 1 shared_informer.go:262] Caches are synced for PV protection
I0315 20:44:34.846845 1 shared_informer.go:262] Caches are synced for persistent volume
I0315 20:44:34.865296 1 shared_informer.go:262] Caches are synced for resource quota
I0315 20:44:34.865353 1 shared_informer.go:262] Caches are synced for cronjob
I0315 20:44:34.894890 1 shared_informer.go:262] Caches are synced for attach detach
I0315 20:44:34.898186 1 shared_informer.go:262] Caches are synced for expand
I0315 20:44:35.314408 1 shared_informer.go:262] Caches are synced for garbage collector
I0315 20:44:35.314474 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0315 20:44:35.320463 1 shared_informer.go:262] Caches are synced for garbage collector
*
* ==> kube-proxy [980cc5472a04389c86b4d551443a119ea7981be5f90efa32c7aa61850ab28c5c] <==
* I0315 20:44:47.752955 1 node.go:163] Successfully retrieved node IP: 192.168.39.81
I0315 20:44:47.753484 1 server_others.go:138] "Detected node IP" address="192.168.39.81"
I0315 20:44:47.753904 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0315 20:44:47.799606 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0315 20:44:47.799658 1 server_others.go:206] "Using iptables Proxier"
I0315 20:44:47.800117 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0315 20:44:47.801117 1 server.go:661] "Version info" version="v1.24.4"
I0315 20:44:47.801160 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0315 20:44:47.803915 1 config.go:317] "Starting service config controller"
I0315 20:44:47.805520 1 shared_informer.go:255] Waiting for caches to sync for service config
I0315 20:44:47.806495 1 config.go:226] "Starting endpoint slice config controller"
I0315 20:44:47.808591 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0315 20:44:47.804318 1 config.go:444] "Starting node config controller"
I0315 20:44:47.812366 1 shared_informer.go:255] Waiting for caches to sync for node config
I0315 20:44:47.812507 1 shared_informer.go:262] Caches are synced for node config
I0315 20:44:47.905975 1 shared_informer.go:262] Caches are synced for service config
I0315 20:44:47.909226 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [1a9969c553620fb99616e73ed0437249a479e9e984fc58830c424b05e831d443] <==
* W0315 20:43:59.492377 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.39.81:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.81:8443: connect: connection refused
E0315 20:43:59.492446 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.81:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.81:8443: connect: connection refused
W0315 20:43:59.494711 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.39.81:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.81:8443: connect: connection refused
E0315 20:43:59.494864 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.81:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.81:8443: connect: connection refused
W0315 20:43:59.502647 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.81:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.81:8443: connect: connection refused
E0315 20:43:59.502721 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.81:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.81:8443: connect: connection refused
W0315 20:43:59.798424 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.81:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.81:8443: connect: connection refused
E0315 20:43:59.798505 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.81:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.81:8443: connect: connection refused
W0315 20:44:08.795079 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0315 20:44:08.796690 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0315 20:44:08.797175 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0315 20:44:08.797422 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0315 20:44:08.797589 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0315 20:44:08.797844 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0315 20:44:08.798093 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0315 20:44:08.798269 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0315 20:44:08.798547 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0315 20:44:08.798699 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0315 20:44:08.798874 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0315 20:44:08.799023 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0315 20:44:08.799080 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0315 20:44:08.799318 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0315 20:44:08.833147 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
E0315 20:44:08.833301 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
I0315 20:44:29.483723 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Wed 2023-03-15 20:43:14 UTC, ends at Wed 2023-03-15 20:45:10 UTC. --
Mar 15 20:44:10 test-preload-380460 kubelet[820]: E0315 20:44:10.217967 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-6xvbn_kube-system(71ce35e9-9989-44ac-a354-77945d47533f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-6xvbn_kube-system(71ce35e9-9989-44ac-a354-77945d47533f)\\\": rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-884248056 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/27: file exists\"" pod="kube-system/kube-proxy-6xvbn" podUID=71ce35e9-9989-44ac-a354-77945d47533f
Mar 15 20:44:10 test-preload-380460 kubelet[820]: E0315 20:44:10.971106 820 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Mar 15 20:44:10 test-preload-380460 kubelet[820]: E0315 20:44:10.971227 820 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9628590f-f582-47c4-a991-245405b5b610-config-volume podName:9628590f-f582-47c4-a991-245405b5b610 nodeName:}" failed. No retries permitted until 2023-03-15 20:44:12.971209596 +0000 UTC m=+24.851021049 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9628590f-f582-47c4-a991-245405b5b610-config-volume") pod "coredns-6d4b75cb6d-drm2z" (UID: "9628590f-f582-47c4-a991-245405b5b610") : object "kube-system"/"coredns" not registered
Mar 15 20:44:11 test-preload-380460 kubelet[820]: E0315 20:44:11.385892 820 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6d4b75cb6d-drm2z" podUID=9628590f-f582-47c4-a991-245405b5b610
Mar 15 20:44:12 test-preload-380460 kubelet[820]: E0315 20:44:12.987888 820 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Mar 15 20:44:12 test-preload-380460 kubelet[820]: E0315 20:44:12.988098 820 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9628590f-f582-47c4-a991-245405b5b610-config-volume podName:9628590f-f582-47c4-a991-245405b5b610 nodeName:}" failed. No retries permitted until 2023-03-15 20:44:16.98802832 +0000 UTC m=+28.867839764 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9628590f-f582-47c4-a991-245405b5b610-config-volume") pod "coredns-6d4b75cb6d-drm2z" (UID: "9628590f-f582-47c4-a991-245405b5b610") : object "kube-system"/"coredns" not registered
Mar 15 20:44:17 test-preload-380460 kubelet[820]: E0315 20:44:17.308542 820 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-82440763 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28: file exists"
Mar 15 20:44:17 test-preload-380460 kubelet[820]: E0315 20:44:17.309006 820 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-82440763 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28: file exists" pod="kube-system/coredns-6d4b75cb6d-drm2z"
Mar 15 20:44:17 test-preload-380460 kubelet[820]: E0315 20:44:17.309074 820 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-82440763 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28: file exists" pod="kube-system/coredns-6d4b75cb6d-drm2z"
Mar 15 20:44:17 test-preload-380460 kubelet[820]: E0315 20:44:17.309158 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-drm2z_kube-system(9628590f-f582-47c4-a991-245405b5b610)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-drm2z_kube-system(9628590f-f582-47c4-a991-245405b5b610)\\\": rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-82440763 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28: file exists\"" pod="kube-system/coredns-6d4b75cb6d-drm2z" podUID=9628590f-f582-47c4-a991-245405b5b610
Mar 15 20:44:21 test-preload-380460 kubelet[820]: E0315 20:44:21.770101 820 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-865900997 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29: file exists" podSandboxID="a57d8a0ff1e23ff29613dd83371657395106a04f130f94f9407e634502239560"
Mar 15 20:44:21 test-preload-380460 kubelet[820]: E0315 20:44:21.770238 820 kuberuntime_manager.go:905] container &Container{Name:kube-controller-manager,Image:k8s.gcr.io/kube-controller-manager:v1.24.4,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},} start failed in pod kube-controller-manager-test-preload-380460_kube-system(ba5a7bfd5a46cf9f1fff858d31f743d2): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-865900997 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29: file exists
Mar 15 20:44:21 test-preload-380460 kubelet[820]: E0315 20:44:21.770301 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-865900997 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29: file exists\"" pod="kube-system/kube-controller-manager-test-preload-380460" podUID=ba5a7bfd5a46cf9f1fff858d31f743d2
Mar 15 20:44:24 test-preload-380460 kubelet[820]: E0315 20:44:24.401972 820 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2952657314 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/30: file exists"
Mar 15 20:44:24 test-preload-380460 kubelet[820]: E0315 20:44:24.403078 820 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2952657314 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/30: file exists" pod="kube-system/kube-proxy-6xvbn"
Mar 15 20:44:24 test-preload-380460 kubelet[820]: E0315 20:44:24.403150 820 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2952657314 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/30: file exists" pod="kube-system/kube-proxy-6xvbn"
Mar 15 20:44:24 test-preload-380460 kubelet[820]: E0315 20:44:24.403271 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-6xvbn_kube-system(71ce35e9-9989-44ac-a354-77945d47533f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-6xvbn_kube-system(71ce35e9-9989-44ac-a354-77945d47533f)\\\": rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2952657314 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/30: file exists\"" pod="kube-system/kube-proxy-6xvbn" podUID=71ce35e9-9989-44ac-a354-77945d47533f
Mar 15 20:44:32 test-preload-380460 kubelet[820]: E0315 20:44:32.876548 820 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-351386384 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists" podSandboxID="0684c7388f597059295d32e99e78a217751915a2ce1f40c160d96f6b02c916c7"
Mar 15 20:44:32 test-preload-380460 kubelet[820]: E0315 20:44:32.877009 820 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z7f7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-drm2z_kube-system(9628590f-f582-47c4-a991-245405b5b610): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-351386384 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists
Mar 15 20:44:32 test-preload-380460 kubelet[820]: E0315 20:44:32.877304 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-351386384 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists\"" pod="kube-system/coredns-6d4b75cb6d-drm2z" podUID=9628590f-f582-47c4-a991-245405b5b610
Mar 15 20:44:35 test-preload-380460 kubelet[820]: E0315 20:44:35.399836 820 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1770097048 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists"
Mar 15 20:44:35 test-preload-380460 kubelet[820]: E0315 20:44:35.399914 820 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1770097048 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists" pod="kube-system/kube-proxy-6xvbn"
Mar 15 20:44:35 test-preload-380460 kubelet[820]: E0315 20:44:35.399936 820 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1770097048 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists" pod="kube-system/kube-proxy-6xvbn"
Mar 15 20:44:35 test-preload-380460 kubelet[820]: E0315 20:44:35.399981 820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-6xvbn_kube-system(71ce35e9-9989-44ac-a354-77945d47533f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-6xvbn_kube-system(71ce35e9-9989-44ac-a354-77945d47533f)\\\": rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1770097048 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists\"" pod="kube-system/kube-proxy-6xvbn" podUID=71ce35e9-9989-44ac-a354-77945d47533f
Mar 15 20:44:41 test-preload-380460 kubelet[820]: I0315 20:44:41.532167 820 scope.go:110] "RemoveContainer" containerID="64103aeb40eebdd5a4937d0ecf6b99be52be94564348337b6dc491e84742aed4"
*
* ==> storage-provisioner [31fa481435c65d371095154d143b84fc1df425537a7a5cab9501d64431c91561] <==
* I0315 20:44:41.667885 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
*
* ==> storage-provisioner [64103aeb40eebdd5a4937d0ecf6b99be52be94564348337b6dc491e84742aed4] <==
* I0315 20:44:10.911886 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0315 20:44:40.926667 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-380460 -n test-preload-380460
helpers_test.go:261: (dbg) Run: kubectl --context test-preload-380460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-380460" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-380460
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-380460: (1.180072135s)
--- FAIL: TestPreload (324.16s)
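
Triage sketch (not produced by the test harness): every sandbox and container create in the kubelet journal above fails with the same containerd error, "failed to rename .../snapshots/new-<id> .../snapshots/<n>: file exists", which suggests that overlayfs snapshotter state under /mnt/vda1/var/lib/containerd survived the stop and is colliding with directories the second start tries to create. On a fresh reproduction, before the profile is deleted, one way to check that hypothesis is to look at the snapshot directories inside the VM; the paths are copied from the errors above, while the commands themselves are an assumption about how to poke at the guest, not steps the test performs:

  # list the snapshotter directories; the numbered targets from the errors (27-32) already existing would point at stale state from the pre-stop run
  out/minikube-linux-amd64 ssh -p test-preload-380460 -- sudo ls /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots
  # containerd's own view of the same failures
  out/minikube-linux-amd64 ssh -p test-preload-380460 -- "sudo journalctl -u containerd --no-pager | grep 'file exists'"

A heavier check is to see whether the pods come up after stopping kubelet and containerd, wiping the snapshotter directory, and restarting the services, but that also discards the preloaded images, so it only localizes the bug to snapshot reuse; it does not exercise the preload path under test.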