=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-778713 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E0321 22:31:15.067157 64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:31:38.898209 64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-778713 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (1m55.228324995s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-778713 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run: out/minikube-linux-amd64 stop -p test-preload-778713
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-778713: (1m31.921439992s)
preload_test.go:71: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-778713 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd
E0321 22:33:29.737393 64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-778713 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd: (3m1.820330015s)
preload_test.go:80: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-778713 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got
-- stdout --
IMAGE                                     TAG                  IMAGE ID        SIZE
docker.io/kindest/kindnetd                v20220726-ed811e41   d921cee849482   25.8MB
gcr.io/k8s-minikube/storage-provisioner   v5                   6e38f40d628db   9.06MB
k8s.gcr.io/coredns/coredns                v1.8.6               a4ca41631cc7a   13.6MB
k8s.gcr.io/etcd                           3.5.3-0              aebe758cef4cd   102MB
k8s.gcr.io/kube-apiserver                 v1.24.4              6cab9d1bed1be   33.8MB
k8s.gcr.io/kube-controller-manager        v1.24.4              1f99cb6da9a82   31MB
k8s.gcr.io/kube-proxy                     v1.24.4              7a53d1e08ef58   39.5MB
k8s.gcr.io/kube-scheduler                 v1.24.4              03fa22539fc1c   15.5MB
k8s.gcr.io/pause                          3.7                  221177c6082a8   311kB
-- /stdout --
panic.go:522: *** TestPreload FAILED at 2023-03-21 22:36:13.826252527 +0000 UTC m=+2806.272990007
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-778713 -n test-preload-778713
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-778713 logs -n 25
E0321 22:36:15.066229 64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-778713 logs -n 25: (1.136011872s)
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-508124 ssh -n | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
| | multinode-508124-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-508124 ssh -n multinode-508124 sudo cat | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
| | /home/docker/cp-test_multinode-508124-m03_multinode-508124.txt | | | | | |
| cp | multinode-508124 cp multinode-508124-m03:/home/docker/cp-test.txt | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
| | multinode-508124-m02:/home/docker/cp-test_multinode-508124-m03_multinode-508124-m02.txt | | | | | |
| ssh | multinode-508124 ssh -n | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
| | multinode-508124-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-508124 ssh -n multinode-508124-m02 sudo cat | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
| | /home/docker/cp-test_multinode-508124-m03_multinode-508124-m02.txt | | | | | |
| node | multinode-508124 node stop m03 | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
| node | multinode-508124 node start | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:12 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-508124 | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:12 UTC | |
| stop | -p multinode-508124 | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:12 UTC | 21 Mar 23 22:15 UTC |
| start | -p multinode-508124 | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:15 UTC | 21 Mar 23 22:21 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-508124 | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:21 UTC | |
| node | multinode-508124 node delete | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:21 UTC | 21 Mar 23 22:21 UTC |
| | m03 | | | | | |
| stop | multinode-508124 stop | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:21 UTC | 21 Mar 23 22:24 UTC |
| start | -p multinode-508124 | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:24 UTC | 21 Mar 23 22:28 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-508124 | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:28 UTC | |
| start | -p multinode-508124-m02 | multinode-508124-m02 | jenkins | v1.29.0 | 21 Mar 23 22:28 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-508124-m03 | multinode-508124-m03 | jenkins | v1.29.0 | 21 Mar 23 22:28 UTC | 21 Mar 23 22:29 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-508124 | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:29 UTC | |
| delete | -p multinode-508124-m03 | multinode-508124-m03 | jenkins | v1.29.0 | 21 Mar 23 22:29 UTC | 21 Mar 23 22:29 UTC |
| delete | -p multinode-508124 | multinode-508124 | jenkins | v1.29.0 | 21 Mar 23 22:29 UTC | 21 Mar 23 22:29 UTC |
| start | -p test-preload-778713 | test-preload-778713 | jenkins | v1.29.0 | 21 Mar 23 22:29 UTC | 21 Mar 23 22:31 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-778713 | test-preload-778713 | jenkins | v1.29.0 | 21 Mar 23 22:31 UTC | 21 Mar 23 22:31 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| stop | -p test-preload-778713 | test-preload-778713 | jenkins | v1.29.0 | 21 Mar 23 22:31 UTC | 21 Mar 23 22:33 UTC |
| start | -p test-preload-778713 | test-preload-778713 | jenkins | v1.29.0 | 21 Mar 23 22:33 UTC | 21 Mar 23 22:36 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p test-preload-778713 -- sudo | test-preload-778713 | jenkins | v1.29.0 | 21 Mar 23 22:36 UTC | 21 Mar 23 22:36 UTC |
| | crictl image ls | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/21 22:33:11
Running on machine: ubuntu-20-agent-12
Binary: Built with gc go1.20.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0321 22:33:11.803469 79998 out.go:296] Setting OutFile to fd 1 ...
I0321 22:33:11.803569 79998 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0321 22:33:11.803577 79998 out.go:309] Setting ErrFile to fd 2...
I0321 22:33:11.803582 79998 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0321 22:33:11.803677 79998 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-57437/.minikube/bin
I0321 22:33:11.804199 79998 out.go:303] Setting JSON to false
I0321 22:33:11.805067 79998 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11742,"bootTime":1679426250,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0321 22:33:11.805117 79998 start.go:135] virtualization: kvm guest
I0321 22:33:11.807536 79998 out.go:177] * [test-preload-778713] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0321 22:33:11.809385 79998 notify.go:220] Checking for updates...
I0321 22:33:11.810889 79998 out.go:177] - MINIKUBE_LOCATION=16124
I0321 22:33:11.812910 79998 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0321 22:33:11.814316 79998 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
I0321 22:33:11.815681 79998 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
I0321 22:33:11.817037 79998 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0321 22:33:11.818354 79998 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0321 22:33:11.819917 79998 config.go:182] Loaded profile config "test-preload-778713": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0321 22:33:11.820268 79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0321 22:33:11.820310 79998 main.go:141] libmachine: Launching plugin server for driver kvm2
I0321 22:33:11.833948 79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
I0321 22:33:11.834305 79998 main.go:141] libmachine: () Calling .GetVersion
I0321 22:33:11.834808 79998 main.go:141] libmachine: Using API Version 1
I0321 22:33:11.834831 79998 main.go:141] libmachine: () Calling .SetConfigRaw
I0321 22:33:11.835188 79998 main.go:141] libmachine: () Calling .GetMachineName
I0321 22:33:11.835366 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:33:11.837005 79998 out.go:177] * Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
I0321 22:33:11.838303 79998 driver.go:365] Setting default libvirt URI to qemu:///system
I0321 22:33:11.838721 79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0321 22:33:11.838758 79998 main.go:141] libmachine: Launching plugin server for driver kvm2
I0321 22:33:11.851819 79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
I0321 22:33:11.852205 79998 main.go:141] libmachine: () Calling .GetVersion
I0321 22:33:11.852659 79998 main.go:141] libmachine: Using API Version 1
I0321 22:33:11.852723 79998 main.go:141] libmachine: () Calling .SetConfigRaw
I0321 22:33:11.852991 79998 main.go:141] libmachine: () Calling .GetMachineName
I0321 22:33:11.853170 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:33:11.884706 79998 out.go:177] * Using the kvm2 driver based on existing profile
I0321 22:33:11.885956 79998 start.go:295] selected driver: kvm2
I0321 22:33:11.885968 79998 start.go:856] validating driver "kvm2" against &{Name:test-preload-778713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-778713 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0321 22:33:11.886084 79998 start.go:867] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0321 22:33:11.886755 79998 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0321 22:33:11.886824 79998 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16124-57437/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0321 22:33:11.899407 79998 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0321 22:33:11.899696 79998 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0321 22:33:11.899733 79998 cni.go:84] Creating CNI manager for ""
I0321 22:33:11.899748 79998 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0321 22:33:11.899763 79998 start_flags.go:319] config:
{Name:test-preload-778713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-778713 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0321 22:33:11.899872 79998 iso.go:125] acquiring lock: {Name:mkfce26b31a4ea2eba60da091679606a7e7271e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0321 22:33:11.901583 79998 out.go:177] * Starting control plane node test-preload-778713 in cluster test-preload-778713
I0321 22:33:11.902824 79998 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0321 22:33:11.927873 79998 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0321 22:33:11.927894 79998 cache.go:57] Caching tarball of preloaded images
I0321 22:33:11.928002 79998 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0321 22:33:11.929648 79998 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
I0321 22:33:11.930959 79998 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0321 22:33:11.963831 79998 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/16124-57437/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0321 22:33:15.083975 79998 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0321 22:33:15.084057 79998 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16124-57437/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0321 22:33:16.005314 79998 cache.go:60] Finished verifying existence of preloaded tar for v1.24.4 on containerd
I0321 22:33:16.005469 79998 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/config.json ...
I0321 22:33:16.005676 79998 cache.go:193] Successfully downloaded all kic artifacts
I0321 22:33:16.005706 79998 start.go:364] acquiring machines lock for test-preload-778713: {Name:mkb5caebff1efd48c9f7f7696365f0c61c19b667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0321 22:33:16.005761 79998 start.go:368] acquired machines lock for "test-preload-778713" in 40.978µs
I0321 22:33:16.005776 79998 start.go:96] Skipping create...Using existing machine configuration
I0321 22:33:16.005781 79998 fix.go:55] fixHost starting:
I0321 22:33:16.006041 79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0321 22:33:16.006075 79998 main.go:141] libmachine: Launching plugin server for driver kvm2
I0321 22:33:16.020069 79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45273
I0321 22:33:16.020497 79998 main.go:141] libmachine: () Calling .GetVersion
I0321 22:33:16.021044 79998 main.go:141] libmachine: Using API Version 1
I0321 22:33:16.021071 79998 main.go:141] libmachine: () Calling .SetConfigRaw
I0321 22:33:16.021386 79998 main.go:141] libmachine: () Calling .GetMachineName
I0321 22:33:16.021612 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:33:16.021777 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetState
I0321 22:33:16.023345 79998 fix.go:103] recreateIfNeeded on test-preload-778713: state=Stopped err=<nil>
I0321 22:33:16.023385 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
W0321 22:33:16.023567 79998 fix.go:129] unexpected machine state, will restart: <nil>
I0321 22:33:16.026198 79998 out.go:177] * Restarting existing kvm2 VM for "test-preload-778713" ...
I0321 22:33:16.027628 79998 main.go:141] libmachine: (test-preload-778713) Calling .Start
I0321 22:33:16.027789 79998 main.go:141] libmachine: (test-preload-778713) Ensuring networks are active...
I0321 22:33:16.028483 79998 main.go:141] libmachine: (test-preload-778713) Ensuring network default is active
I0321 22:33:16.028835 79998 main.go:141] libmachine: (test-preload-778713) Ensuring network mk-test-preload-778713 is active
I0321 22:33:16.029195 79998 main.go:141] libmachine: (test-preload-778713) Getting domain xml...
I0321 22:33:16.029810 79998 main.go:141] libmachine: (test-preload-778713) Creating domain...
I0321 22:33:17.222988 79998 main.go:141] libmachine: (test-preload-778713) Waiting to get IP...
I0321 22:33:17.223950 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:17.224306 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:17.224400 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:17.224308 80033 retry.go:31] will retry after 234.269246ms: waiting for machine to come up
I0321 22:33:17.459749 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:17.460228 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:17.460254 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:17.460171 80033 retry.go:31] will retry after 374.02864ms: waiting for machine to come up
I0321 22:33:17.835356 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:17.835739 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:17.835764 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:17.835683 80033 retry.go:31] will retry after 326.78501ms: waiting for machine to come up
I0321 22:33:18.164110 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:18.164534 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:18.164566 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:18.164461 80033 retry.go:31] will retry after 543.227464ms: waiting for machine to come up
I0321 22:33:18.709002 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:18.709469 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:18.709496 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:18.709422 80033 retry.go:31] will retry after 502.469144ms: waiting for machine to come up
I0321 22:33:19.213235 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:19.213697 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:19.213721 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:19.213647 80033 retry.go:31] will retry after 587.0711ms: waiting for machine to come up
I0321 22:33:19.802438 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:19.802937 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:19.802987 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:19.802864 80033 retry.go:31] will retry after 1.110796312s: waiting for machine to come up
I0321 22:33:20.915024 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:20.915380 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:20.915401 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:20.915329 80033 retry.go:31] will retry after 1.258745231s: waiting for machine to come up
I0321 22:33:22.175388 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:22.175735 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:22.175759 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:22.175708 80033 retry.go:31] will retry after 1.480442121s: waiting for machine to come up
I0321 22:33:23.658653 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:23.659084 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:23.659137 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:23.659083 80033 retry.go:31] will retry after 2.001321941s: waiting for machine to come up
I0321 22:33:25.663257 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:25.663728 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:25.663750 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:25.663669 80033 retry.go:31] will retry after 2.322790555s: waiting for machine to come up
I0321 22:33:27.988573 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:27.989018 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:27.989048 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:27.988959 80033 retry.go:31] will retry after 2.488215716s: waiting for machine to come up
I0321 22:33:30.479268 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:30.479623 79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
I0321 22:33:30.479649 79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:30.479566 80033 retry.go:31] will retry after 3.795193672s: waiting for machine to come up
I0321 22:33:34.278630 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.279107 79998 main.go:141] libmachine: (test-preload-778713) Found IP for machine: 192.168.39.129
I0321 22:33:34.279137 79998 main.go:141] libmachine: (test-preload-778713) Reserving static IP address...
I0321 22:33:34.279156 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has current primary IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.279461 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "test-preload-778713", mac: "52:54:00:24:1d:09", ip: "192.168.39.129"} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:34.279485 79998 main.go:141] libmachine: (test-preload-778713) DBG | skip adding static IP to network mk-test-preload-778713 - found existing host DHCP lease matching {name: "test-preload-778713", mac: "52:54:00:24:1d:09", ip: "192.168.39.129"}
I0321 22:33:34.279496 79998 main.go:141] libmachine: (test-preload-778713) Reserved static IP address: 192.168.39.129
I0321 22:33:34.279510 79998 main.go:141] libmachine: (test-preload-778713) Waiting for SSH to be available...
I0321 22:33:34.279530 79998 main.go:141] libmachine: (test-preload-778713) DBG | Getting to WaitForSSH function...
I0321 22:33:34.281473 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.281768 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:34.281800 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.281905 79998 main.go:141] libmachine: (test-preload-778713) DBG | Using SSH client type: external
I0321 22:33:34.281930 79998 main.go:141] libmachine: (test-preload-778713) DBG | Using SSH private key: /home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa (-rw-------)
I0321 22:33:34.281960 79998 main.go:141] libmachine: (test-preload-778713) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa -p 22] /usr/bin/ssh <nil>}
I0321 22:33:34.281982 79998 main.go:141] libmachine: (test-preload-778713) DBG | About to run SSH command:
I0321 22:33:34.281996 79998 main.go:141] libmachine: (test-preload-778713) DBG | exit 0
I0321 22:33:34.377727 79998 main.go:141] libmachine: (test-preload-778713) DBG | SSH cmd err, output: <nil>:
I0321 22:33:34.378014 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetConfigRaw
I0321 22:33:34.378614 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetIP
I0321 22:33:34.380806 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.381087 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:34.381112 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.381388 79998 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/config.json ...
I0321 22:33:34.381571 79998 machine.go:88] provisioning docker machine ...
I0321 22:33:34.381593 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:33:34.381798 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetMachineName
I0321 22:33:34.381942 79998 buildroot.go:166] provisioning hostname "test-preload-778713"
I0321 22:33:34.381964 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetMachineName
I0321 22:33:34.382116 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
I0321 22:33:34.384279 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.384596 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:34.384628 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.384703 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
I0321 22:33:34.384880 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:33:34.385015 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:33:34.385140 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
I0321 22:33:34.385280 79998 main.go:141] libmachine: Using SSH client type: native
I0321 22:33:34.385718 79998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil> [] 0s} 192.168.39.129 22 <nil> <nil>}
I0321 22:33:34.385735 79998 main.go:141] libmachine: About to run SSH command:
sudo hostname test-preload-778713 && echo "test-preload-778713" | sudo tee /etc/hostname
I0321 22:33:34.527761 79998 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-778713
I0321 22:33:34.527794 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
I0321 22:33:34.530290 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.530630 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:34.530668 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.530774 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
I0321 22:33:34.530966 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:33:34.531121 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:33:34.531264 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
I0321 22:33:34.531417 79998 main.go:141] libmachine: Using SSH client type: native
I0321 22:33:34.531852 79998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil> [] 0s} 192.168.39.129 22 <nil> <nil>}
I0321 22:33:34.531874 79998 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-778713' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-778713/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-778713' | sudo tee -a /etc/hosts;
fi
fi
I0321 22:33:34.669299 79998 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0321 22:33:34.669331 79998 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16124-57437/.minikube CaCertPath:/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16124-57437/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16124-57437/.minikube}
I0321 22:33:34.669355 79998 buildroot.go:174] setting up certificates
I0321 22:33:34.669378 79998 provision.go:83] configureAuth start
I0321 22:33:34.669393 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetMachineName
I0321 22:33:34.669624 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetIP
I0321 22:33:34.672015 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.672342 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:34.672384 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.672539 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
I0321 22:33:34.674619 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.674908 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:34.674939 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.675046 79998 provision.go:138] copyHostCerts
I0321 22:33:34.675102 79998 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-57437/.minikube/ca.pem, removing ...
I0321 22:33:34.675112 79998 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-57437/.minikube/ca.pem
I0321 22:33:34.675174 79998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16124-57437/.minikube/ca.pem (1082 bytes)
I0321 22:33:34.675251 79998 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-57437/.minikube/cert.pem, removing ...
I0321 22:33:34.675262 79998 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-57437/.minikube/cert.pem
I0321 22:33:34.675291 79998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16124-57437/.minikube/cert.pem (1123 bytes)
I0321 22:33:34.675338 79998 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-57437/.minikube/key.pem, removing ...
I0321 22:33:34.675345 79998 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-57437/.minikube/key.pem
I0321 22:33:34.675365 79998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16124-57437/.minikube/key.pem (1679 bytes)
I0321 22:33:34.675407 79998 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16124-57437/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca-key.pem org=jenkins.test-preload-778713 san=[192.168.39.129 192.168.39.129 localhost 127.0.0.1 minikube test-preload-778713]
I0321 22:33:34.789603 79998 provision.go:172] copyRemoteCerts
I0321 22:33:34.789653 79998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0321 22:33:34.789670 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
I0321 22:33:34.791939 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.792226 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:34.792258 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.792391 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
I0321 22:33:34.792584 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:33:34.792779 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
I0321 22:33:34.792959 79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
I0321 22:33:34.887216 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0321 22:33:34.909407 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0321 22:33:34.931156 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0321 22:33:34.952834 79998 provision.go:86] duration metric: configureAuth took 283.442577ms
I0321 22:33:34.952857 79998 buildroot.go:189] setting minikube options for container-runtime
I0321 22:33:34.953031 79998 config.go:182] Loaded profile config "test-preload-778713": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0321 22:33:34.953045 79998 machine.go:91] provisioned docker machine in 571.461456ms
I0321 22:33:34.953055 79998 start.go:300] post-start starting for "test-preload-778713" (driver="kvm2")
I0321 22:33:34.953064 79998 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0321 22:33:34.953108 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:33:34.953394 79998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0321 22:33:34.953433 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
I0321 22:33:34.956372 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.956690 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:34.956719 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:34.956947 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
I0321 22:33:34.957142 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:33:34.957329 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
I0321 22:33:34.957500 79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
I0321 22:33:35.052040 79998 ssh_runner.go:195] Run: cat /etc/os-release
I0321 22:33:35.056208 79998 info.go:137] Remote host: Buildroot 2021.02.12
I0321 22:33:35.056229 79998 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-57437/.minikube/addons for local assets ...
I0321 22:33:35.056289 79998 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-57437/.minikube/files for local assets ...
I0321 22:33:35.056362 79998 filesync.go:149] local asset: /home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/ssl/certs/644982.pem -> 644982.pem in /etc/ssl/certs
I0321 22:33:35.056440 79998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0321 22:33:35.065052 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/ssl/certs/644982.pem --> /etc/ssl/certs/644982.pem (1708 bytes)
I0321 22:33:35.086967 79998 start.go:303] post-start completed in 133.899031ms
I0321 22:33:35.086984 79998 fix.go:57] fixHost completed within 19.081203401s
I0321 22:33:35.087007 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
I0321 22:33:35.089478 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:35.089809 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:35.089849 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:35.090024 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
I0321 22:33:35.090218 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:33:35.090388 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:33:35.090580 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
I0321 22:33:35.090748 79998 main.go:141] libmachine: Using SSH client type: native
I0321 22:33:35.091157 79998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil> [] 0s} 192.168.39.129 22 <nil> <nil>}
I0321 22:33:35.091171 79998 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0321 22:33:35.218625 79998 main.go:141] libmachine: SSH cmd err, output: <nil>: 1679438015.168336345
I0321 22:33:35.218663 79998 fix.go:207] guest clock: 1679438015.168336345
I0321 22:33:35.218674 79998 fix.go:220] Guest: 2023-03-21 22:33:35.168336345 +0000 UTC Remote: 2023-03-21 22:33:35.086987671 +0000 UTC m=+23.322213811 (delta=81.348674ms)
I0321 22:33:35.218700 79998 fix.go:191] guest clock delta is within tolerance: 81.348674ms
I0321 22:33:35.218711 79998 start.go:83] releasing machines lock for "test-preload-778713", held for 19.212938969s
I0321 22:33:35.218735 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:33:35.219015 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetIP
I0321 22:33:35.221405 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:35.221868 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:35.221905 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:35.221967 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:33:35.222482 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:33:35.222642 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:33:35.222734 79998 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0321 22:33:35.222770 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
I0321 22:33:35.222899 79998 ssh_runner.go:195] Run: cat /version.json
I0321 22:33:35.222933 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
I0321 22:33:35.225233 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:35.225478 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:35.225608 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:35.225637 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:35.225773 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
I0321 22:33:35.225895 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:35.225922 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:33:35.225925 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:35.226017 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
I0321 22:33:35.226090 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
I0321 22:33:35.226163 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:33:35.226214 79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
I0321 22:33:35.226298 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
I0321 22:33:35.226437 79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
I0321 22:33:35.338814 79998 ssh_runner.go:195] Run: systemctl --version
I0321 22:33:35.344215 79998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0321 22:33:35.349734 79998 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0321 22:33:35.349787 79998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0321 22:33:35.364702 79998 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0321 22:33:35.364719 79998 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0321 22:33:35.364801 79998 ssh_runner.go:195] Run: sudo crictl images --output json
I0321 22:33:39.401849 79998 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.037018637s)
I0321 22:33:39.401965 79998 containerd.go:606] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
I0321 22:33:39.402031 79998 ssh_runner.go:195] Run: which lz4
I0321 22:33:39.406543 79998 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0321 22:33:39.410932 79998 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0321 22:33:39.410992 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
I0321 22:33:41.210780 79998 containerd.go:553] Took 1.804267 seconds to copy over tarball
I0321 22:33:41.210855 79998 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0321 22:33:44.258183 79998 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.047301821s)
I0321 22:33:44.258210 79998 containerd.go:560] Took 3.047402 seconds to extract the tarball
I0321 22:33:44.258219 79998 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0321 22:33:44.298745 79998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0321 22:33:44.390728 79998 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0321 22:33:44.407273 79998 start.go:485] detecting cgroup driver to use...
I0321 22:33:44.407344 79998 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0321 22:33:47.102820 79998 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.695445612s)
I0321 22:33:47.102894 79998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0321 22:33:47.115624 79998 docker.go:186] disabling cri-docker service (if available) ...
I0321 22:33:47.115671 79998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0321 22:33:47.126668 79998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0321 22:33:47.137884 79998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0321 22:33:47.231644 79998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0321 22:33:47.333961 79998 docker.go:202] disabling docker service ...
I0321 22:33:47.334023 79998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0321 22:33:47.346635 79998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0321 22:33:47.357603 79998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0321 22:33:47.457503 79998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0321 22:33:47.562200 79998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0321 22:33:47.574112 79998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0321 22:33:47.590414 79998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
I0321 22:33:47.600005 79998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0321 22:33:47.609401 79998 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0321 22:33:47.609442 79998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0321 22:33:47.620175 79998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0321 22:33:47.631405 79998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0321 22:33:47.640756 79998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0321 22:33:47.650177 79998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0321 22:33:47.660193 79998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0321 22:33:47.669789 79998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0321 22:33:47.678285 79998 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0321 22:33:47.678327 79998 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0321 22:33:47.691556 79998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0321 22:33:47.700301 79998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0321 22:33:47.798490 79998 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0321 22:33:47.821937 79998 start.go:532] Will wait 60s for socket path /run/containerd/containerd.sock
I0321 22:33:47.822001 79998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0321 22:33:47.827152 79998 retry.go:31] will retry after 692.932342ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0321 22:33:48.521216 79998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0321 22:33:48.526578 79998 start.go:553] Will wait 60s for crictl version
I0321 22:33:48.526630 79998 ssh_runner.go:195] Run: which crictl
I0321 22:33:48.530368 79998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0321 22:33:48.562922 79998 start.go:569] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.19
RuntimeApiVersion: v1alpha2
I0321 22:33:48.562969 79998 ssh_runner.go:195] Run: containerd --version
I0321 22:33:48.592179 79998 ssh_runner.go:195] Run: containerd --version
I0321 22:33:48.626446 79998 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.19 ...
I0321 22:33:48.627691 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetIP
I0321 22:33:48.630171 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:48.630491 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:33:48.630519 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:33:48.630749 79998 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0321 22:33:48.634646 79998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0321 22:33:48.646216 79998 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0321 22:33:48.646305 79998 ssh_runner.go:195] Run: sudo crictl images --output json
I0321 22:33:48.673713 79998 containerd.go:610] all images are preloaded for containerd runtime.
I0321 22:33:48.673734 79998 containerd.go:524] Images already preloaded, skipping extraction
I0321 22:33:48.673775 79998 ssh_runner.go:195] Run: sudo crictl images --output json
I0321 22:33:48.700322 79998 containerd.go:610] all images are preloaded for containerd runtime.
I0321 22:33:48.700344 79998 cache_images.go:84] Images are preloaded, skipping loading
I0321 22:33:48.700383 79998 ssh_runner.go:195] Run: sudo crictl info
I0321 22:33:48.727914 79998 cni.go:84] Creating CNI manager for ""
I0321 22:33:48.727938 79998 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0321 22:33:48.727962 79998 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0321 22:33:48.727980 79998 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-778713 NodeName:test-preload-778713 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0321 22:33:48.728090 79998 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.129
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-778713"
kubeletExtraArgs:
node-ip: 192.168.39.129
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.4
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0321 22:33:48.728164 79998 kubeadm.go:968] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-778713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
[Install]
config:
{KubernetesVersion:v1.24.4 ClusterName:test-preload-778713 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0321 22:33:48.728211 79998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
I0321 22:33:48.737123 79998 binaries.go:44] Found k8s binaries, skipping transfer
I0321 22:33:48.737169 79998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0321 22:33:48.745695 79998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (393 bytes)
I0321 22:33:48.760844 79998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0321 22:33:48.775300 79998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
I0321 22:33:48.789961 79998 ssh_runner.go:195] Run: grep 192.168.39.129 control-plane.minikube.internal$ /etc/hosts
I0321 22:33:48.793364 79998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0321 22:33:48.804167 79998 certs.go:56] Setting up /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713 for IP: 192.168.39.129
I0321 22:33:48.804195 79998 certs.go:186] acquiring lock for shared ca certs: {Name:mkac58eaa17acb86160b42b722a075f3da28a096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0321 22:33:48.804345 79998 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16124-57437/.minikube/ca.key
I0321 22:33:48.804382 79998 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16124-57437/.minikube/proxy-client-ca.key
I0321 22:33:48.804452 79998 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.key
I0321 22:33:48.804509 79998 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/apiserver.key.9233f9e0
I0321 22:33:48.804546 79998 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/proxy-client.key
I0321 22:33:48.804642 79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/64498.pem (1338 bytes)
W0321 22:33:48.804667 79998 certs.go:397] ignoring /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/64498_empty.pem, impossibly tiny 0 bytes
I0321 22:33:48.804678 79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca-key.pem (1679 bytes)
I0321 22:33:48.804705 79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca.pem (1082 bytes)
I0321 22:33:48.804730 79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/cert.pem (1123 bytes)
I0321 22:33:48.804752 79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/key.pem (1679 bytes)
I0321 22:33:48.804793 79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/ssl/certs/644982.pem (1708 bytes)
I0321 22:33:48.805312 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0321 22:33:48.826201 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0321 22:33:48.846767 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0321 22:33:48.867693 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0321 22:33:48.888542 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0321 22:33:48.909457 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0321 22:33:48.930110 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0321 22:33:48.951210 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0321 22:33:48.972018 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0321 22:33:48.992468 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/certs/64498.pem --> /usr/share/ca-certificates/64498.pem (1338 bytes)
I0321 22:33:49.013116 79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/ssl/certs/644982.pem --> /usr/share/ca-certificates/644982.pem (1708 bytes)
I0321 22:33:49.033767 79998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0321 22:33:49.048517 79998 ssh_runner.go:195] Run: openssl version
I0321 22:33:49.053687 79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64498.pem && ln -fs /usr/share/ca-certificates/64498.pem /etc/ssl/certs/64498.pem"
I0321 22:33:49.063325 79998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64498.pem
I0321 22:33:49.067623 79998 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 21 21:55 /usr/share/ca-certificates/64498.pem
I0321 22:33:49.067661 79998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64498.pem
I0321 22:33:49.072537 79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/64498.pem /etc/ssl/certs/51391683.0"
I0321 22:33:49.081936 79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/644982.pem && ln -fs /usr/share/ca-certificates/644982.pem /etc/ssl/certs/644982.pem"
I0321 22:33:49.091443 79998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/644982.pem
I0321 22:33:49.095719 79998 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 21 21:55 /usr/share/ca-certificates/644982.pem
I0321 22:33:49.095754 79998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/644982.pem
I0321 22:33:49.100681 79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/644982.pem /etc/ssl/certs/3ec20f2e.0"
I0321 22:33:49.110305 79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0321 22:33:49.119933 79998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0321 22:33:49.124001 79998 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 21 21:50 /usr/share/ca-certificates/minikubeCA.pem
I0321 22:33:49.124031 79998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0321 22:33:49.129072 79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0321 22:33:49.138648 79998 kubeadm.go:401] StartCluster: {Name:test-preload-778713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-778713 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0321 22:33:49.138747 79998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0321 22:33:49.138785 79998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0321 22:33:49.165591 79998 cri.go:87] found id: ""
I0321 22:33:49.165635 79998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0321 22:33:49.174203 79998 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0321 22:33:49.174216 79998 kubeadm.go:633] restartCluster start
I0321 22:33:49.174253 79998 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0321 22:33:49.182484 79998 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0321 22:33:49.182930 79998 kubeconfig.go:135] verify returned: extract IP: "test-preload-778713" does not appear in /home/jenkins/minikube-integration/16124-57437/kubeconfig
I0321 22:33:49.183065 79998 kubeconfig.go:146] "test-preload-778713" context is missing from /home/jenkins/minikube-integration/16124-57437/kubeconfig - will repair!
I0321 22:33:49.183303 79998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-57437/kubeconfig: {Name:mk8ee86e6b55120ac24d22c302b6f0547947acf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0321 22:33:49.183893 79998 kapi.go:59] client config for test-preload-778713: &rest.Config{Host:"https://192.168.39.129:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.key", CAFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0321 22:33:49.184685 79998 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0321 22:33:49.193162 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:49.193205 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:49.203910 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:49.704560 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:49.704652 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:49.716040 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:50.204739 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:50.204841 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:50.216399 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:50.704081 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:50.704161 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:50.715569 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:51.204586 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:51.204656 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:51.215928 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:51.704456 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:51.704553 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:51.716501 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:52.204274 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:52.204353 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:52.215515 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:52.704074 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:52.704175 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:52.715901 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:53.204448 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:53.204543 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:53.216087 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:53.704692 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:53.704762 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:53.716111 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:54.204795 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:54.204893 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:54.216907 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:54.704488 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:54.704563 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:54.716013 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:55.204623 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:55.204698 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:55.215822 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:55.704365 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:55.704461 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:55.715844 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:56.204658 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:56.204737 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:56.216268 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:56.704859 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:56.704947 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:56.717204 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:57.204796 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:57.204882 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:57.216236 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:57.704891 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:57.704997 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:57.716601 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:58.204209 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:58.204298 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:58.215866 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:58.704408 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:58.704498 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:58.715978 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:59.204713 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:59.204787 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:59.215820 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:59.215840 79998 api_server.go:165] Checking apiserver status ...
I0321 22:33:59.215884 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0321 22:33:59.226641 79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0321 22:33:59.226665 79998 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
I0321 22:33:59.226671 79998 kubeadm.go:1120] stopping kube-system containers ...
I0321 22:33:59.226695 79998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0321 22:33:59.226746 79998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0321 22:33:59.254555 79998 cri.go:87] found id: ""
I0321 22:33:59.254619 79998 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0321 22:33:59.269468 79998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0321 22:33:59.277733 79998 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0321 22:33:59.277785 79998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0321 22:33:59.285731 79998 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0321 22:33:59.285747 79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0321 22:33:59.382863 79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0321 22:34:00.023599 79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0321 22:34:00.338109 79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0321 22:34:00.432527 79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0321 22:34:00.504048 79998 api_server.go:51] waiting for apiserver process to appear ...
I0321 22:34:00.504128 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:01.020577 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:01.520439 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:02.021144 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:02.520963 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:03.020423 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:03.521196 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:04.020474 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:04.521055 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:05.020388 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:05.520432 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:06.020341 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:06.520597 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:07.020897 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:07.520378 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:08.020738 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:08.520538 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:09.020273 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:09.521307 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:10.020559 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:10.521321 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:11.020940 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:11.521168 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:12.020457 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:12.520922 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:13.020911 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:13.520762 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:14.020679 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:14.521101 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:15.020444 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:15.521178 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:16.021120 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:16.520453 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:34:16.533579 79998 api_server.go:71] duration metric: took 16.02953311s to wait for apiserver process to appear ...
I0321 22:34:16.533602 79998 api_server.go:87] waiting for apiserver healthz status ...
I0321 22:34:16.533616 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:21.534194 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0321 22:34:22.035100 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:27.036188 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0321 22:34:27.534783 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:32.535848 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0321 22:34:33.034417 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:36.587602 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": read tcp 192.168.39.1:41520->192.168.39.129:8443: read: connection reset by peer
I0321 22:34:37.035156 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:37.035680 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:37.535337 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:37.535928 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:38.034602 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:38.035292 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:38.534947 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:38.535491 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:39.035098 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:39.035699 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:39.534512 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:39.535218 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:40.034801 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:40.035369 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:40.535084 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:40.535700 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:41.034396 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:41.035004 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:41.534951 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:41.535531 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:42.034545 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:42.035146 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:42.534720 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:42.535322 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:43.034945 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:43.035497 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:43.535112 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:43.535699 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:44.034372 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:44.034982 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:44.534528 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:44.535164 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:45.034749 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:45.035352 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:45.534964 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:45.535594 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:46.035248 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:46.035914 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:46.534828 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:46.535399 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:47.034934 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:47.035583 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:47.535273 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:47.535975 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:48.034560 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:48.035207 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:48.534748 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:48.535384 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:49.035174 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:49.035796 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:49.534663 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:49.535371 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:50.034994 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:50.035575 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:50.535193 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:50.535861 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:51.034406 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:51.034988 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:51.535085 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:51.535704 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:52.034612 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:52.035203 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:52.534746 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:52.535323 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:53.034978 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:53.035650 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:53.535260 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:53.535886 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:54.034462 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:54.035056 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:54.534593 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:54.535203 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:55.034757 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:55.035402 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:55.535045 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:55.535605 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:56.035257 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:56.035964 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:56.535085 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:56.535698 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:57.035361 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:57.035969 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:57.534528 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:57.535094 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:58.034641 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:34:58.035290 79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
I0321 22:34:58.534842 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:35:01.123844 79998 api_server.go:278] https://192.168.39.129:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0321 22:35:01.123872 79998 api_server.go:102] status: https://192.168.39.129:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0321 22:35:01.534401 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:35:01.540218 79998 api_server.go:278] https://192.168.39.129:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0321 22:35:01.540240 79998 api_server.go:102] status: https://192.168.39.129:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0321 22:35:02.034721 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:35:02.040713 79998 api_server.go:278] https://192.168.39.129:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0321 22:35:02.040739 79998 api_server.go:102] status: https://192.168.39.129:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0321 22:35:02.534330 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:35:02.540353 79998 api_server.go:278] https://192.168.39.129:8443/healthz returned 200:
ok
I0321 22:35:02.547672 79998 api_server.go:140] control plane version: v1.24.4
I0321 22:35:02.547698 79998 api_server.go:130] duration metric: took 46.014088995s to wait for apiserver health ...
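The 403 → 500 → 200 progression above is the apiserver finishing startup: the first probes are rejected as system:anonymous, then /healthz lists the rbac/bootstrap-roles and scheduling post-start hooks as still pending, and finally every check passes. As a hedged sketch (not part of the test), the same probe can be reproduced by hand; the certificate paths follow the profile layout shown in the client config later in this log and should be treated as assumptions:
# Sketch only: probe the apiserver health endpoint with the profile's client certs.
# The ?verbose query prints the same [+]/[-] check list seen in the 500 responses above.
curl --cacert /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt \
     --cert /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.crt \
     --key /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.key \
     "https://192.168.39.129:8443/healthz?verbose"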
I0321 22:35:02.547712 79998 cni.go:84] Creating CNI manager for ""
I0321 22:35:02.547720 79998 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0321 22:35:02.549470 79998 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0321 22:35:02.550720 79998 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0321 22:35:02.561781 79998 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
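For reference, the 457-byte conflist copied to /etc/cni/net.d/1-k8s.conflist configures the CNI bridge plugin selected two lines above. A minimal sketch of such a file follows; every field value here is illustrative, not a copy of the file minikube actually generated for this run:
# Illustrative bridge + host-local CNI config; not the exact file minikube wrote.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF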
I0321 22:35:02.580474 79998 system_pods.go:43] waiting for kube-system pods to appear ...
I0321 22:35:02.588242 79998 system_pods.go:59] 7 kube-system pods found
I0321 22:35:02.588267 79998 system_pods.go:61] "coredns-6d4b75cb6d-4zkrg" [9ba80daf-32d4-41a3-a1bd-7c8b3168a4db] Running
I0321 22:35:02.588275 79998 system_pods.go:61] "etcd-test-preload-778713" [ceeb8dba-f8d6-4d4b-ae99-3f8295266274] Running
I0321 22:35:02.588281 79998 system_pods.go:61] "kube-apiserver-test-preload-778713" [518a0d87-b51c-443f-8542-75e44a061897] Running
I0321 22:35:02.588288 79998 system_pods.go:61] "kube-controller-manager-test-preload-778713" [e5ef86be-1e24-4dd4-8934-d0c609c733f4] Running
I0321 22:35:02.588293 79998 system_pods.go:61] "kube-proxy-vdrfz" [42f3e5be-8516-465e-8d63-949a1de4a66d] Running
I0321 22:35:02.588306 79998 system_pods.go:61] "kube-scheduler-test-preload-778713" [932e8280-bfba-4a2d-912c-374f30a8cc37] Running
I0321 22:35:02.588313 79998 system_pods.go:61] "storage-provisioner" [15af5481-be73-4e4b-8d93-f78926fa2edf] Running
I0321 22:35:02.588320 79998 system_pods.go:74] duration metric: took 7.824362ms to wait for pod list to return data ...
I0321 22:35:02.588329 79998 node_conditions.go:102] verifying NodePressure condition ...
I0321 22:35:02.591408 79998 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0321 22:35:02.591433 79998 node_conditions.go:123] node cpu capacity is 2
I0321 22:35:02.591447 79998 node_conditions.go:105] duration metric: took 3.111739ms to run NodePressure ...
I0321 22:35:02.591465 79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0321 22:35:02.782046 79998 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0321 22:35:02.786049 79998 retry.go:31] will retry after 168.284477ms: kubelet not initialised
I0321 22:35:02.959784 79998 retry.go:31] will retry after 405.745497ms: kubelet not initialised
I0321 22:35:03.370937 79998 retry.go:31] will retry after 689.497642ms: kubelet not initialised
I0321 22:35:04.065310 79998 retry.go:31] will retry after 1.025423078s: kubelet not initialised
I0321 22:35:05.097032 79998 retry.go:31] will retry after 1.195125094s: kubelet not initialised
I0321 22:35:06.298676 79998 retry.go:31] will retry after 1.772228539s: kubelet not initialised
I0321 22:35:08.078802 79998 retry.go:31] will retry after 3.395567739s: kubelet not initialised
I0321 22:35:11.483486 79998 retry.go:31] will retry after 4.378086122s: kubelet not initialised
I0321 22:35:15.869890 79998 retry.go:31] will retry after 6.120616139s: kubelet not initialised
I0321 22:35:21.996055 79998 kubeadm.go:784] kubelet initialised
I0321 22:35:21.996080 79998 kubeadm.go:785] duration metric: took 19.214013885s waiting for restarted kubelet to initialise ...
I0321 22:35:21.996088 79998 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0321 22:35:22.001693 79998 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace to be "Ready" ...
I0321 22:35:24.015470 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:26.513887 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:29.013017 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:31.013502 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:33.015110 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:35.515475 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:38.016882 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:40.513881 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:43.014103 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:45.015375 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:47.514276 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:49.514878 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:51.515303 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:54.014711 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:56.515662 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:35:59.013583 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:36:01.515599 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:36:04.014655 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:36:06.514208 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:36:08.515354 79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
I0321 22:36:09.516135 79998 pod_ready.go:92] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:09.516163 79998 pod_ready.go:81] duration metric: took 47.514446645s waiting for pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.516174 79998 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.521246 79998 pod_ready.go:92] pod "etcd-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:09.521263 79998 pod_ready.go:81] duration metric: took 5.083367ms waiting for pod "etcd-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.521271 79998 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.525765 79998 pod_ready.go:92] pod "kube-apiserver-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:09.525789 79998 pod_ready.go:81] duration metric: took 4.509946ms waiting for pod "kube-apiserver-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.525801 79998 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.530302 79998 pod_ready.go:92] pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:09.530322 79998 pod_ready.go:81] duration metric: took 4.512556ms waiting for pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.530334 79998 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vdrfz" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.534782 79998 pod_ready.go:92] pod "kube-proxy-vdrfz" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:09.534799 79998 pod_ready.go:81] duration metric: took 4.458247ms waiting for pod "kube-proxy-vdrfz" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.534807 79998 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.915118 79998 pod_ready.go:92] pod "kube-scheduler-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:09.915146 79998 pod_ready.go:81] duration metric: took 380.33221ms waiting for pod "kube-scheduler-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:09.915161 79998 pod_ready.go:38] duration metric: took 47.919062912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
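The per-pod waits above poll pod status through the client-go API. A rough equivalent from inside the guest, using the in-guest kubectl binary and kubeconfig paths that appear elsewhere in this log (sketch only, not what minikube actually runs):
# Sketch: wait for the same system-critical pods to become Ready from the node itself.
sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s
sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m0s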
I0321 22:36:09.915186 79998 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0321 22:36:09.928182 79998 ops.go:34] apiserver oom_adj: -16
I0321 22:36:09.928205 79998 kubeadm.go:637] restartCluster took 2m20.753981878s
I0321 22:36:09.928215 79998 kubeadm.go:403] StartCluster complete in 2m20.789574221s
I0321 22:36:09.928237 79998 settings.go:142] acquiring lock: {Name:mk79799ddbbfcee95eba9c02d869416a2516522c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0321 22:36:09.928365 79998 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/16124-57437/kubeconfig
I0321 22:36:09.929176 79998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-57437/kubeconfig: {Name:mk8ee86e6b55120ac24d22c302b6f0547947acf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0321 22:36:09.929448 79998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0321 22:36:09.929596 79998 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0321 22:36:09.929698 79998 addons.go:66] Setting storage-provisioner=true in profile "test-preload-778713"
I0321 22:36:09.929721 79998 addons.go:228] Setting addon storage-provisioner=true in "test-preload-778713"
W0321 22:36:09.929728 79998 addons.go:237] addon storage-provisioner should already be in state true
I0321 22:36:09.929722 79998 config.go:182] Loaded profile config "test-preload-778713": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0321 22:36:09.929745 79998 addons.go:66] Setting default-storageclass=true in profile "test-preload-778713"
I0321 22:36:09.929781 79998 host.go:66] Checking if "test-preload-778713" exists ...
I0321 22:36:09.929784 79998 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-778713"
I0321 22:36:09.930069 79998 kapi.go:59] client config for test-preload-778713: &rest.Config{Host:"https://192.168.39.129:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.key", CAFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0321 22:36:09.930219 79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0321 22:36:09.930294 79998 main.go:141] libmachine: Launching plugin server for driver kvm2
I0321 22:36:09.930399 79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0321 22:36:09.930450 79998 main.go:141] libmachine: Launching plugin server for driver kvm2
I0321 22:36:09.933521 79998 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-778713" context rescaled to 1 replicas
I0321 22:36:09.933570 79998 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0321 22:36:09.937056 79998 out.go:177] * Verifying Kubernetes components...
I0321 22:36:09.938444 79998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0321 22:36:09.945989 79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39447
I0321 22:36:09.946018 79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46659
I0321 22:36:09.946422 79998 main.go:141] libmachine: () Calling .GetVersion
I0321 22:36:09.946455 79998 main.go:141] libmachine: () Calling .GetVersion
I0321 22:36:09.946953 79998 main.go:141] libmachine: Using API Version 1
I0321 22:36:09.946982 79998 main.go:141] libmachine: () Calling .SetConfigRaw
I0321 22:36:09.947093 79998 main.go:141] libmachine: Using API Version 1
I0321 22:36:09.947114 79998 main.go:141] libmachine: () Calling .SetConfigRaw
I0321 22:36:09.947328 79998 main.go:141] libmachine: () Calling .GetMachineName
I0321 22:36:09.947458 79998 main.go:141] libmachine: () Calling .GetMachineName
I0321 22:36:09.947685 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetState
I0321 22:36:09.947841 79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0321 22:36:09.947888 79998 main.go:141] libmachine: Launching plugin server for driver kvm2
I0321 22:36:09.950157 79998 kapi.go:59] client config for test-preload-778713: &rest.Config{Host:"https://192.168.39.129:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.key", CAFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0321 22:36:09.958884 79998 addons.go:228] Setting addon default-storageclass=true in "test-preload-778713"
W0321 22:36:09.958912 79998 addons.go:237] addon default-storageclass should already be in state true
I0321 22:36:09.958942 79998 host.go:66] Checking if "test-preload-778713" exists ...
I0321 22:36:09.959317 79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0321 22:36:09.959344 79998 main.go:141] libmachine: Launching plugin server for driver kvm2
I0321 22:36:09.967028 79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
I0321 22:36:09.967509 79998 main.go:141] libmachine: () Calling .GetVersion
I0321 22:36:09.968079 79998 main.go:141] libmachine: Using API Version 1
I0321 22:36:09.968108 79998 main.go:141] libmachine: () Calling .SetConfigRaw
I0321 22:36:09.968513 79998 main.go:141] libmachine: () Calling .GetMachineName
I0321 22:36:09.968747 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetState
I0321 22:36:09.970699 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:36:09.973594 79998 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0321 22:36:09.975304 79998 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0321 22:36:09.975327 79998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0321 22:36:09.975350 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
I0321 22:36:09.976403 79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
I0321 22:36:09.976821 79998 main.go:141] libmachine: () Calling .GetVersion
I0321 22:36:09.977414 79998 main.go:141] libmachine: Using API Version 1
I0321 22:36:09.977440 79998 main.go:141] libmachine: () Calling .SetConfigRaw
I0321 22:36:09.977773 79998 main.go:141] libmachine: () Calling .GetMachineName
I0321 22:36:09.978383 79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0321 22:36:09.978413 79998 main.go:141] libmachine: Launching plugin server for driver kvm2
I0321 22:36:09.979084 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:36:09.979567 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:36:09.979599 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:36:09.979747 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
I0321 22:36:09.979959 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:36:09.980130 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
I0321 22:36:09.980275 79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
I0321 22:36:09.992945 79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
I0321 22:36:09.993352 79998 main.go:141] libmachine: () Calling .GetVersion
I0321 22:36:09.993830 79998 main.go:141] libmachine: Using API Version 1
I0321 22:36:09.993849 79998 main.go:141] libmachine: () Calling .SetConfigRaw
I0321 22:36:09.994176 79998 main.go:141] libmachine: () Calling .GetMachineName
I0321 22:36:09.994414 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetState
I0321 22:36:09.995852 79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
I0321 22:36:09.996134 79998 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
I0321 22:36:09.996151 79998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0321 22:36:09.996166 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
I0321 22:36:09.998970 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:36:09.999442 79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
I0321 22:36:09.999477 79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
I0321 22:36:09.999603 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
I0321 22:36:09.999761 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
I0321 22:36:09.999899 79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
I0321 22:36:10.000056 79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
I0321 22:36:10.103266 79998 node_ready.go:35] waiting up to 6m0s for node "test-preload-778713" to be "Ready" ...
I0321 22:36:10.103306 79998 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0321 22:36:10.111502 79998 node_ready.go:49] node "test-preload-778713" has status "Ready":"True"
I0321 22:36:10.111523 79998 node_ready.go:38] duration metric: took 8.222478ms waiting for node "test-preload-778713" to be "Ready" ...
I0321 22:36:10.111531 79998 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0321 22:36:10.125083 79998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0321 22:36:10.126283 79998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0321 22:36:10.314803 79998 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace to be "Ready" ...
I0321 22:36:10.713322 79998 pod_ready.go:92] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:10.713359 79998 pod_ready.go:81] duration metric: took 398.525046ms waiting for pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace to be "Ready" ...
I0321 22:36:10.713373 79998 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:10.887489 79998 main.go:141] libmachine: Making call to close driver server
I0321 22:36:10.887526 79998 main.go:141] libmachine: (test-preload-778713) Calling .Close
I0321 22:36:10.887924 79998 main.go:141] libmachine: (test-preload-778713) DBG | Closing plugin on server side
I0321 22:36:10.888007 79998 main.go:141] libmachine: Successfully made call to close driver server
I0321 22:36:10.888032 79998 main.go:141] libmachine: Making call to close connection to plugin binary
I0321 22:36:10.888049 79998 main.go:141] libmachine: Making call to close driver server
I0321 22:36:10.888067 79998 main.go:141] libmachine: (test-preload-778713) Calling .Close
I0321 22:36:10.888330 79998 main.go:141] libmachine: Successfully made call to close driver server
I0321 22:36:10.888353 79998 main.go:141] libmachine: Making call to close connection to plugin binary
I0321 22:36:10.888359 79998 main.go:141] libmachine: (test-preload-778713) DBG | Closing plugin on server side
I0321 22:36:10.888371 79998 main.go:141] libmachine: Making call to close driver server
I0321 22:36:10.888383 79998 main.go:141] libmachine: (test-preload-778713) Calling .Close
I0321 22:36:10.888603 79998 main.go:141] libmachine: Successfully made call to close driver server
I0321 22:36:10.888620 79998 main.go:141] libmachine: Making call to close connection to plugin binary
I0321 22:36:10.965700 79998 main.go:141] libmachine: Making call to close driver server
I0321 22:36:10.965724 79998 main.go:141] libmachine: (test-preload-778713) Calling .Close
I0321 22:36:10.966018 79998 main.go:141] libmachine: Successfully made call to close driver server
I0321 22:36:10.966038 79998 main.go:141] libmachine: Making call to close connection to plugin binary
I0321 22:36:10.966065 79998 main.go:141] libmachine: Making call to close driver server
I0321 22:36:10.966075 79998 main.go:141] libmachine: (test-preload-778713) Calling .Close
I0321 22:36:10.966137 79998 main.go:141] libmachine: (test-preload-778713) DBG | Closing plugin on server side
I0321 22:36:10.966306 79998 main.go:141] libmachine: Successfully made call to close driver server
I0321 22:36:10.966324 79998 main.go:141] libmachine: Making call to close connection to plugin binary
I0321 22:36:10.966326 79998 main.go:141] libmachine: (test-preload-778713) DBG | Closing plugin on server side
I0321 22:36:10.968749 79998 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0321 22:36:10.970172 79998 addons.go:499] enable addons completed in 1.040575575s: enabled=[default-storageclass storage-provisioner]
I0321 22:36:11.111117 79998 pod_ready.go:92] pod "etcd-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:11.111133 79998 pod_ready.go:81] duration metric: took 397.751491ms waiting for pod "etcd-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:11.111142 79998 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:11.512217 79998 pod_ready.go:92] pod "kube-apiserver-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:11.512237 79998 pod_ready.go:81] duration metric: took 401.08831ms waiting for pod "kube-apiserver-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:11.512247 79998 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:11.911787 79998 pod_ready.go:92] pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:11.911808 79998 pod_ready.go:81] duration metric: took 399.554216ms waiting for pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:11.911818 79998 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vdrfz" in "kube-system" namespace to be "Ready" ...
I0321 22:36:12.311785 79998 pod_ready.go:92] pod "kube-proxy-vdrfz" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:12.311807 79998 pod_ready.go:81] duration metric: took 399.98271ms waiting for pod "kube-proxy-vdrfz" in "kube-system" namespace to be "Ready" ...
I0321 22:36:12.311817 79998 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:12.712237 79998 pod_ready.go:92] pod "kube-scheduler-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
I0321 22:36:12.712258 79998 pod_ready.go:81] duration metric: took 400.435232ms waiting for pod "kube-scheduler-test-preload-778713" in "kube-system" namespace to be "Ready" ...
I0321 22:36:12.712269 79998 pod_ready.go:38] duration metric: took 2.600726468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0321 22:36:12.712291 79998 api_server.go:51] waiting for apiserver process to appear ...
I0321 22:36:12.712332 79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0321 22:36:12.726796 79998 api_server.go:71] duration metric: took 2.79318534s to wait for apiserver process to appear ...
I0321 22:36:12.726828 79998 api_server.go:87] waiting for apiserver healthz status ...
I0321 22:36:12.726848 79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
I0321 22:36:12.732331 79998 api_server.go:278] https://192.168.39.129:8443/healthz returned 200:
ok
I0321 22:36:12.733373 79998 api_server.go:140] control plane version: v1.24.4
I0321 22:36:12.733390 79998 api_server.go:130] duration metric: took 6.556349ms to wait for apiserver health ...
I0321 22:36:12.733397 79998 system_pods.go:43] waiting for kube-system pods to appear ...
I0321 22:36:12.914698 79998 system_pods.go:59] 7 kube-system pods found
I0321 22:36:12.914768 79998 system_pods.go:61] "coredns-6d4b75cb6d-4zkrg" [9ba80daf-32d4-41a3-a1bd-7c8b3168a4db] Running
I0321 22:36:12.914789 79998 system_pods.go:61] "etcd-test-preload-778713" [ceeb8dba-f8d6-4d4b-ae99-3f8295266274] Running
I0321 22:36:12.914796 79998 system_pods.go:61] "kube-apiserver-test-preload-778713" [518a0d87-b51c-443f-8542-75e44a061897] Running
I0321 22:36:12.914804 79998 system_pods.go:61] "kube-controller-manager-test-preload-778713" [e5ef86be-1e24-4dd4-8934-d0c609c733f4] Running
I0321 22:36:12.914810 79998 system_pods.go:61] "kube-proxy-vdrfz" [42f3e5be-8516-465e-8d63-949a1de4a66d] Running
I0321 22:36:12.914816 79998 system_pods.go:61] "kube-scheduler-test-preload-778713" [932e8280-bfba-4a2d-912c-374f30a8cc37] Running
I0321 22:36:12.914824 79998 system_pods.go:61] "storage-provisioner" [15af5481-be73-4e4b-8d93-f78926fa2edf] Running
I0321 22:36:12.914833 79998 system_pods.go:74] duration metric: took 181.42948ms to wait for pod list to return data ...
I0321 22:36:12.914853 79998 default_sa.go:34] waiting for default service account to be created ...
I0321 22:36:13.112376 79998 default_sa.go:45] found service account: "default"
I0321 22:36:13.112410 79998 default_sa.go:55] duration metric: took 197.549527ms for default service account to be created ...
I0321 22:36:13.112422 79998 system_pods.go:116] waiting for k8s-apps to be running ...
I0321 22:36:13.314614 79998 system_pods.go:86] 7 kube-system pods found
I0321 22:36:13.314643 79998 system_pods.go:89] "coredns-6d4b75cb6d-4zkrg" [9ba80daf-32d4-41a3-a1bd-7c8b3168a4db] Running
I0321 22:36:13.314650 79998 system_pods.go:89] "etcd-test-preload-778713" [ceeb8dba-f8d6-4d4b-ae99-3f8295266274] Running
I0321 22:36:13.314654 79998 system_pods.go:89] "kube-apiserver-test-preload-778713" [518a0d87-b51c-443f-8542-75e44a061897] Running
I0321 22:36:13.314659 79998 system_pods.go:89] "kube-controller-manager-test-preload-778713" [e5ef86be-1e24-4dd4-8934-d0c609c733f4] Running
I0321 22:36:13.314663 79998 system_pods.go:89] "kube-proxy-vdrfz" [42f3e5be-8516-465e-8d63-949a1de4a66d] Running
I0321 22:36:13.314667 79998 system_pods.go:89] "kube-scheduler-test-preload-778713" [932e8280-bfba-4a2d-912c-374f30a8cc37] Running
I0321 22:36:13.314671 79998 system_pods.go:89] "storage-provisioner" [15af5481-be73-4e4b-8d93-f78926fa2edf] Running
I0321 22:36:13.314678 79998 system_pods.go:126] duration metric: took 202.250278ms to wait for k8s-apps to be running ...
I0321 22:36:13.314684 79998 system_svc.go:44] waiting for kubelet service to be running ....
I0321 22:36:13.314746 79998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0321 22:36:13.329327 79998 system_svc.go:56] duration metric: took 14.630148ms WaitForService to wait for kubelet.
I0321 22:36:13.329356 79998 kubeadm.go:578] duration metric: took 3.395753535s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0321 22:36:13.329373 79998 node_conditions.go:102] verifying NodePressure condition ...
I0321 22:36:13.512138 79998 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0321 22:36:13.512167 79998 node_conditions.go:123] node cpu capacity is 2
I0321 22:36:13.512180 79998 node_conditions.go:105] duration metric: took 182.80253ms to run NodePressure ...
I0321 22:36:13.512194 79998 start.go:228] waiting for startup goroutines ...
I0321 22:36:13.512203 79998 start.go:233] waiting for cluster config update ...
I0321 22:36:13.512215 79998 start.go:242] writing updated cluster config ...
I0321 22:36:13.512488 79998 ssh_runner.go:195] Run: rm -f paused
I0321 22:36:13.563797 79998 start.go:554] kubectl: 1.26.3, cluster: 1.24.4 (minor skew: 2)
I0321 22:36:13.566354 79998 out.go:177]
W0321 22:36:13.568003 79998 out.go:239] ! /usr/local/bin/kubectl is version 1.26.3, which may have incompatibilities with Kubernetes 1.24.4.
I0321 22:36:13.569606 79998 out.go:177] - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
I0321 22:36:13.571390 79998 out.go:177] * Done! kubectl is now configured to use "test-preload-778713" cluster and "default" namespace by default
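The skew warning above concerns only the host's kubectl (1.26.3 against a 1.24.4 cluster). A sketch of following the hint with the binary built for this run, so the client matches the cluster version (the profile flag mirrors the one used throughout this test):
# Sketch: drive the cluster with minikube's bundled kubectl instead of the host's.
out/minikube-linux-amd64 -p test-preload-778713 kubectl -- get pods -A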
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
df1913d11540e 6e38f40d628db 13 seconds ago Running storage-provisioner 2 34b0462712c65
0f12728de096e 7a53d1e08ef58 39 seconds ago Running kube-proxy 1 7228cd6b24fb9
e94509bc97bb8 a4ca41631cc7a 40 seconds ago Running coredns 1 1275e00773946
2dea0b199e13b 6e38f40d628db 44 seconds ago Exited storage-provisioner 1 34b0462712c65
a194d126ab9a4 1f99cb6da9a82 About a minute ago Running kube-controller-manager 3 32219b621e38c
f7f98bc5b364e 6cab9d1bed1be About a minute ago Running kube-apiserver 2 1b151d4da505f
a78d6bfd8f6b7 aebe758cef4cd About a minute ago Running etcd 1 d47aa1bedc931
c8312b60e7fce 03fa22539fc1c About a minute ago Running kube-scheduler 1 20be83637ffe5
ec92b2c00d9b2 6cab9d1bed1be About a minute ago Exited kube-apiserver 1 1b151d4da505f
e44bf4ae4d833 1f99cb6da9a82 2 minutes ago Exited kube-controller-manager 2 32219b621e38c
*
* ==> containerd <==
* -- Journal begins at Tue 2023-03-21 22:33:26 UTC, ends at Tue 2023-03-21 22:36:14 UTC. --
Mar 21 22:35:19 test-preload-778713 containerd[632]: time="2023-03-21T22:35:19.744471769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 21 22:35:19 test-preload-778713 containerd[632]: time="2023-03-21T22:35:19.744482983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 21 22:35:19 test-preload-778713 containerd[632]: time="2023-03-21T22:35:19.744929648Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7228cd6b24fb98a717c3d424641f2941617e641dc5512e0fa13c2973d7497ef4 pid=1580 runtime=io.containerd.runc.v2
Mar 21 22:35:19 test-preload-778713 containerd[632]: time="2023-03-21T22:35:19.851124326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vdrfz,Uid:42f3e5be-8516-465e-8d63-949a1de4a66d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7228cd6b24fb98a717c3d424641f2941617e641dc5512e0fa13c2973d7497ef4\""
Mar 21 22:35:19 test-preload-778713 containerd[632]: time="2023-03-21T22:35:19.904554625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-4zkrg,Uid:9ba80daf-32d4-41a3-a1bd-7c8b3168a4db,Namespace:kube-system,Attempt:0,} returns sandbox id \"1275e00773946cb87910f4ca87357e11a09502fb1fa490ab80c223995fffbd17\""
Mar 21 22:35:30 test-preload-778713 containerd[632]: time="2023-03-21T22:35:30.567436570Z" level=info msg="CreateContainer within sandbox \"34b0462712c65fef060756cd10c7b3fbff8e9eeec06448dee53e8cb50d9cd270\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
Mar 21 22:35:30 test-preload-778713 containerd[632]: time="2023-03-21T22:35:30.601721750Z" level=info msg="CreateContainer within sandbox \"34b0462712c65fef060756cd10c7b3fbff8e9eeec06448dee53e8cb50d9cd270\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380\""
Mar 21 22:35:30 test-preload-778713 containerd[632]: time="2023-03-21T22:35:30.602872726Z" level=info msg="StartContainer for \"2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380\""
Mar 21 22:35:30 test-preload-778713 containerd[632]: time="2023-03-21T22:35:30.683003584Z" level=info msg="StartContainer for \"2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380\" returns successfully"
Mar 21 22:35:34 test-preload-778713 containerd[632]: time="2023-03-21T22:35:34.563102766Z" level=info msg="CreateContainer within sandbox \"1275e00773946cb87910f4ca87357e11a09502fb1fa490ab80c223995fffbd17\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Mar 21 22:35:34 test-preload-778713 containerd[632]: time="2023-03-21T22:35:34.594039077Z" level=info msg="CreateContainer within sandbox \"1275e00773946cb87910f4ca87357e11a09502fb1fa490ab80c223995fffbd17\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e94509bc97bb8813165cc35eeea89ef83092abc5402cf68fff700bd290208087\""
Mar 21 22:35:34 test-preload-778713 containerd[632]: time="2023-03-21T22:35:34.595507604Z" level=info msg="StartContainer for \"e94509bc97bb8813165cc35eeea89ef83092abc5402cf68fff700bd290208087\""
Mar 21 22:35:34 test-preload-778713 containerd[632]: time="2023-03-21T22:35:34.670423789Z" level=info msg="StartContainer for \"e94509bc97bb8813165cc35eeea89ef83092abc5402cf68fff700bd290208087\" returns successfully"
Mar 21 22:35:35 test-preload-778713 containerd[632]: time="2023-03-21T22:35:35.562911796Z" level=info msg="CreateContainer within sandbox \"7228cd6b24fb98a717c3d424641f2941617e641dc5512e0fa13c2973d7497ef4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
Mar 21 22:35:35 test-preload-778713 containerd[632]: time="2023-03-21T22:35:35.597063027Z" level=info msg="CreateContainer within sandbox \"7228cd6b24fb98a717c3d424641f2941617e641dc5512e0fa13c2973d7497ef4\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"0f12728de096e667d674a130e2a3f8da71dd487711f3760da7c4fc971840a61e\""
Mar 21 22:35:35 test-preload-778713 containerd[632]: time="2023-03-21T22:35:35.598152367Z" level=info msg="StartContainer for \"0f12728de096e667d674a130e2a3f8da71dd487711f3760da7c4fc971840a61e\""
Mar 21 22:35:35 test-preload-778713 containerd[632]: time="2023-03-21T22:35:35.681233194Z" level=info msg="StartContainer for \"0f12728de096e667d674a130e2a3f8da71dd487711f3760da7c4fc971840a61e\" returns successfully"
Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.768385402Z" level=info msg="shim disconnected" id=2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380
Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.768850813Z" level=warning msg="cleaning up after shim disconnected" id=2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380 namespace=k8s.io
Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.768930663Z" level=info msg="cleaning up dead shim"
Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.782029557Z" level=warning msg="cleanup warnings time=\"2023-03-21T22:36:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1811 runtime=io.containerd.runc.v2\n"
Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.903366391Z" level=info msg="CreateContainer within sandbox \"34b0462712c65fef060756cd10c7b3fbff8e9eeec06448dee53e8cb50d9cd270\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.930968669Z" level=info msg="CreateContainer within sandbox \"34b0462712c65fef060756cd10c7b3fbff8e9eeec06448dee53e8cb50d9cd270\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"df1913d11540e787c62cdc0bbf163830e6e55ce53408e6dfd9a8e30ff343be7f\""
Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.932718936Z" level=info msg="StartContainer for \"df1913d11540e787c62cdc0bbf163830e6e55ce53408e6dfd9a8e30ff343be7f\""
Mar 21 22:36:01 test-preload-778713 containerd[632]: time="2023-03-21T22:36:01.041718040Z" level=info msg="StartContainer for \"df1913d11540e787c62cdc0bbf163830e6e55ce53408e6dfd9a8e30ff343be7f\" returns successfully"
*
* ==> coredns [e94509bc97bb8813165cc35eeea89ef83092abc5402cf68fff700bd290208087] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] 127.0.0.1:46870 - 17389 "HINFO IN 7495271272311143749.5675818494558930897. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02778266s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
*
* ==> describe nodes <==
* Name: test-preload-778713
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=test-preload-778713
kubernetes.io/os=linux
minikube.k8s.io/commit=8b6238450160ebd3d5010da9938125282f0eedd4
minikube.k8s.io/name=test-preload-778713
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_21T22_30_44_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 21 Mar 2023 22:30:41 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: test-preload-778713
AcquireTime: <unset>
RenewTime: Tue, 21 Mar 2023 22:36:13 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 21 Mar 2023 22:35:11 +0000 Tue, 21 Mar 2023 22:30:38 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 21 Mar 2023 22:35:11 +0000 Tue, 21 Mar 2023 22:30:38 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 21 Mar 2023 22:35:11 +0000 Tue, 21 Mar 2023 22:30:38 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 21 Mar 2023 22:35:11 +0000 Tue, 21 Mar 2023 22:35:11 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.129
Hostname: test-preload-778713
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: a1673b7de8cd4a05a5e2677840a0d26a
System UUID: a1673b7d-e8cd-4a05-a5e2-677840a0d26a
Boot ID: a53f9495-c984-4ebe-8894-b22bef74aacb
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.19
Kubelet Version: v1.24.4
Kube-Proxy Version: v1.24.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-4zkrg 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 5m17s
kube-system etcd-test-preload-778713 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 5m29s
kube-system kube-apiserver-test-preload-778713 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m30s
kube-system kube-controller-manager-test-preload-778713 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m30s
kube-system kube-proxy-vdrfz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m18s
kube-system kube-scheduler-test-preload-778713 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m30s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m15s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 38s kube-proxy
Normal Starting 5m15s kube-proxy
Normal Starting 5m30s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m30s kubelet Node test-preload-778713 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m30s kubelet Node test-preload-778713 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m30s kubelet Node test-preload-778713 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m30s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 5m20s kubelet Node test-preload-778713 status is now: NodeReady
Normal RegisteredNode 5m18s node-controller Node test-preload-778713 event: Registered Node test-preload-778713 in Controller
Normal Starting 2m14s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m14s (x8 over 2m14s) kubelet Node test-preload-778713 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m14s (x8 over 2m14s) kubelet Node test-preload-778713 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m14s (x7 over 2m14s) kubelet Node test-preload-778713 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m14s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 61s node-controller Node test-preload-778713 event: Registered Node test-preload-778713 in Controller
*
* ==> dmesg <==
* [Mar21 22:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.070350] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.930656] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.187266] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.138912] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.498163] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
[ +15.211172] systemd-fstab-generator[528]: Ignoring "noauto" for root device
[ +2.839850] systemd-fstab-generator[560]: Ignoring "noauto" for root device
[ +0.097646] systemd-fstab-generator[571]: Ignoring "noauto" for root device
[ +0.125241] systemd-fstab-generator[584]: Ignoring "noauto" for root device
[ +0.103923] systemd-fstab-generator[595]: Ignoring "noauto" for root device
[ +0.235112] systemd-fstab-generator[623]: Ignoring "noauto" for root device
[Mar21 22:34] systemd-fstab-generator[817]: Ignoring "noauto" for root device
[Mar21 22:35] kauditd_printk_skb: 7 callbacks suppressed
[Mar21 22:36] kauditd_printk_skb: 8 callbacks suppressed
*
* ==> etcd [a78d6bfd8f6b70c4cef58d78d32f23bd19cf582a21afe03737e4eb8782330c4e] <==
* {"level":"info","ts":"2023-03-21T22:34:39.960Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"245a8df1c58de0e1","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-03-21T22:34:39.961Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-03-21T22:34:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 switched to configuration voters=(2619562202810409185)"}
{"level":"info","ts":"2023-03-21T22:34:39.962Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","added-peer-id":"245a8df1c58de0e1","added-peer-peer-urls":["https://192.168.39.129:2380"]}
{"level":"info","ts":"2023-03-21T22:34:39.962Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-21T22:34:39.962Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-21T22:34:39.964Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-21T22:34:39.964Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"245a8df1c58de0e1","initial-advertise-peer-urls":["https://192.168.39.129:2380"],"listen-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-21T22:34:39.965Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-21T22:34:39.965Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.129:2380"}
{"level":"info","ts":"2023-03-21T22:34:39.965Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.129:2380"}
{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 is starting a new election at term 2"}
{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became pre-candidate at term 2"}
{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgPreVoteResp from 245a8df1c58de0e1 at term 2"}
{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became candidate at term 3"}
{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgVoteResp from 245a8df1c58de0e1 at term 3"}
{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became leader at term 3"}
{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 245a8df1c58de0e1 elected leader 245a8df1c58de0e1 at term 3"}
{"level":"info","ts":"2023-03-21T22:34:41.045Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"245a8df1c58de0e1","local-member-attributes":"{Name:test-preload-778713 ClientURLs:[https://192.168.39.129:2379]}","request-path":"/0/members/245a8df1c58de0e1/attributes","cluster-id":"a2af9788ad7a361f","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-21T22:34:41.046Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-21T22:34:41.047Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.129:2379"}
{"level":"info","ts":"2023-03-21T22:34:41.047Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-21T22:34:41.048Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-21T22:34:41.048Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-21T22:34:41.049Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> kernel <==
* 22:36:14 up 2 min, 0 users, load average: 0.25, 0.12, 0.04
Linux test-preload-778713 5.10.57 #1 SMP Fri Mar 17 22:07:25 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [ec92b2c00d9b294ce2548a83156bfd4288871592390c53319264be24089b8547] <==
* I0321 22:34:16.333267 1 server.go:558] external host was not specified, using 192.168.39.129
I0321 22:34:16.333999 1 server.go:158] Version: v1.24.4
I0321 22:34:16.334046 1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0321 22:34:16.574237 1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
I0321 22:34:16.575299 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0321 22:34:16.575311 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0321 22:34:16.576565 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0321 22:34:16.576580 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0321 22:34:16.579901 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:17.575339 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:17.580402 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:18.575954 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:19.354467 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:20.140720 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:22.356961 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:23.013256 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:25.725917 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:27.067125 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:31.716202 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0321 22:34:34.692458 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
E0321 22:34:36.580548 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-apiserver [f7f98bc5b364e8412b5e83481e4b5d55058cffbe702f2006c5ccf0c57069baad] <==
* I0321 22:35:01.112353 1 controller.go:85] Starting OpenAPI V3 controller
I0321 22:35:01.112368 1 naming_controller.go:291] Starting NamingConditionController
I0321 22:35:01.112518 1 establishing_controller.go:76] Starting EstablishingController
I0321 22:35:01.112525 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0321 22:35:01.112530 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0321 22:35:01.112538 1 crd_finalizer.go:266] Starting CRDFinalizer
I0321 22:35:01.080856 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0321 22:35:01.167498 1 shared_informer.go:262] Caches are synced for node_authorizer
I0321 22:35:01.173613 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0321 22:35:01.178923 1 cache.go:39] Caches are synced for autoregister controller
I0321 22:35:01.180585 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0321 22:35:01.181121 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0321 22:35:01.181384 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0321 22:35:01.202829 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0321 22:35:01.221649 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0321 22:35:01.730404 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0321 22:35:02.087960 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0321 22:35:02.697688 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0321 22:35:02.706971 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0321 22:35:02.742116 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0321 22:35:02.764510 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0321 22:35:02.772333 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0321 22:35:20.110317 1 controller.go:611] quota admission added evaluator for: endpoints
I0321 22:35:20.113261 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0321 22:35:35.860864 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-controller-manager [a194d126ab9a4284f87c9770ea07108020456ecce39d1fbaca418d27f321c62d] <==
* I0321 22:35:13.962711 1 shared_informer.go:262] Caches are synced for persistent volume
I0321 22:35:13.962666 1 shared_informer.go:262] Caches are synced for HPA
I0321 22:35:13.964984 1 shared_informer.go:262] Caches are synced for cronjob
I0321 22:35:13.971265 1 shared_informer.go:262] Caches are synced for attach detach
I0321 22:35:13.991478 1 shared_informer.go:262] Caches are synced for taint
I0321 22:35:13.991710 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0321 22:35:13.991936 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
W0321 22:35:13.992217 1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-778713. Assuming now as a timestamp.
I0321 22:35:13.992277 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0321 22:35:13.992830 1 event.go:294] "Event occurred" object="test-preload-778713" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-778713 event: Registered Node test-preload-778713 in Controller"
I0321 22:35:14.001672 1 shared_informer.go:262] Caches are synced for TTL
I0321 22:35:14.002965 1 shared_informer.go:262] Caches are synced for node
I0321 22:35:14.003101 1 range_allocator.go:173] Starting range CIDR allocator
I0321 22:35:14.003305 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0321 22:35:14.003321 1 shared_informer.go:262] Caches are synced for cidrallocator
I0321 22:35:14.097198 1 shared_informer.go:262] Caches are synced for namespace
I0321 22:35:14.103920 1 shared_informer.go:262] Caches are synced for resource quota
I0321 22:35:14.107323 1 shared_informer.go:262] Caches are synced for service account
I0321 22:35:14.113713 1 shared_informer.go:262] Caches are synced for stateful set
I0321 22:35:14.155177 1 shared_informer.go:262] Caches are synced for disruption
I0321 22:35:14.155210 1 disruption.go:371] Sending events to api server.
I0321 22:35:14.161801 1 shared_informer.go:262] Caches are synced for resource quota
I0321 22:35:14.607471 1 shared_informer.go:262] Caches are synced for garbage collector
I0321 22:35:14.612841 1 shared_informer.go:262] Caches are synced for garbage collector
I0321 22:35:14.612937 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [e44bf4ae4d83323c8294d825664087f954c343c35d2b33be081be33a5efbbea5] <==
* vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x2f6
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run.func1()
vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:165 +0x3c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x3931a60?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x4d010e0, 0xc000748a20}, 0x1, 0xc000102360)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0xdf8475800, 0x0, 0xa0?, 0xc00006a7d0?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x4d2abb0?, 0xc000622980?, 0xc00078b860?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:164 +0x372
goroutine 144 [syscall]:
syscall.Syscall6(0xe8, 0xd, 0xc000a8fc14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
/usr/local/go/src/syscall/asm_linux_amd64.s:43 +0x5
k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0x99b17e83b43979f2?, {0xc000a8fc14?, 0xab082cace494b7fc?, 0x5a594ffa9574f6ca?}, 0xc52e66829fa87e8b?)
vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:56 +0x58
k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc00065f3e0)
vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x7d
k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0000b6730)
vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x26e
created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1c5
*
* ==> kube-proxy [0f12728de096e667d674a130e2a3f8da71dd487711f3760da7c4fc971840a61e] <==
* I0321 22:35:35.776316 1 node.go:163] Successfully retrieved node IP: 192.168.39.129
I0321 22:35:35.776700 1 server_others.go:138] "Detected node IP" address="192.168.39.129"
I0321 22:35:35.777013 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0321 22:35:35.843632 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0321 22:35:35.843675 1 server_others.go:206] "Using iptables Proxier"
I0321 22:35:35.843699 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0321 22:35:35.844363 1 server.go:661] "Version info" version="v1.24.4"
I0321 22:35:35.844398 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0321 22:35:35.847666 1 config.go:444] "Starting node config controller"
I0321 22:35:35.847705 1 shared_informer.go:255] Waiting for caches to sync for node config
I0321 22:35:35.848574 1 config.go:317] "Starting service config controller"
I0321 22:35:35.848635 1 shared_informer.go:255] Waiting for caches to sync for service config
I0321 22:35:35.850814 1 config.go:226] "Starting endpoint slice config controller"
I0321 22:35:35.850853 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0321 22:35:35.949306 1 shared_informer.go:262] Caches are synced for node config
I0321 22:35:35.950485 1 shared_informer.go:262] Caches are synced for service config
I0321 22:35:35.951681 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [c8312b60e7fceab9785451247e3cbf4e2e56d9b90b7debd653ddb6dbb7804226] <==
* W0321 22:34:53.008339 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.39.129:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:53.008376 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.129:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:34:54.064408 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.129:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:54.064435 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.129:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:34:54.334229 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:54.334259 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:34:54.663596 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.39.129:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:54.663635 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.129:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:34:55.127563 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.39.129:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:55.127622 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.129:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:34:56.073617 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.39.129:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:56.073679 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.129:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:34:56.166996 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.39.129:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:56.167019 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.129:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:34:56.926152 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.129:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:56.926185 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.129:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:34:57.178450 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:57.178489 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:34:57.217895 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:57.217951 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:34:57.842092 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.39.129:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
E0321 22:34:57.842148 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.129:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
W0321 22:35:01.130168 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0321 22:35:01.130222 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I0321 22:35:18.505084 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Tue 2023-03-21 22:33:26 UTC, ends at Tue 2023-03-21 22:36:15 UTC. --
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.496843 823 topology_manager.go:200] "Topology Admit Handler"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.497188 823 topology_manager.go:200] "Topology Admit Handler"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.497386 823 topology_manager.go:200] "Topology Admit Handler"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605070 823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42f3e5be-8516-465e-8d63-949a1de4a66d-kube-proxy\") pod \"kube-proxy-vdrfz\" (UID: \"42f3e5be-8516-465e-8d63-949a1de4a66d\") " pod="kube-system/kube-proxy-vdrfz"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605206 823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42f3e5be-8516-465e-8d63-949a1de4a66d-xtables-lock\") pod \"kube-proxy-vdrfz\" (UID: \"42f3e5be-8516-465e-8d63-949a1de4a66d\") " pod="kube-system/kube-proxy-vdrfz"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605256 823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42f3e5be-8516-465e-8d63-949a1de4a66d-lib-modules\") pod \"kube-proxy-vdrfz\" (UID: \"42f3e5be-8516-465e-8d63-949a1de4a66d\") " pod="kube-system/kube-proxy-vdrfz"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605359 823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtdq4\" (UniqueName: \"kubernetes.io/projected/9ba80daf-32d4-41a3-a1bd-7c8b3168a4db-kube-api-access-gtdq4\") pod \"coredns-6d4b75cb6d-4zkrg\" (UID: \"9ba80daf-32d4-41a3-a1bd-7c8b3168a4db\") " pod="kube-system/coredns-6d4b75cb6d-4zkrg"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605427 823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnx4w\" (UniqueName: \"kubernetes.io/projected/42f3e5be-8516-465e-8d63-949a1de4a66d-kube-api-access-hnx4w\") pod \"kube-proxy-vdrfz\" (UID: \"42f3e5be-8516-465e-8d63-949a1de4a66d\") " pod="kube-system/kube-proxy-vdrfz"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605452 823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ba80daf-32d4-41a3-a1bd-7c8b3168a4db-config-volume\") pod \"coredns-6d4b75cb6d-4zkrg\" (UID: \"9ba80daf-32d4-41a3-a1bd-7c8b3168a4db\") " pod="kube-system/coredns-6d4b75cb6d-4zkrg"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605470 823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/15af5481-be73-4e4b-8d93-f78926fa2edf-tmp\") pod \"storage-provisioner\" (UID: \"15af5481-be73-4e4b-8d93-f78926fa2edf\") " pod="kube-system/storage-provisioner"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605496 823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsj8\" (UniqueName: \"kubernetes.io/projected/15af5481-be73-4e4b-8d93-f78926fa2edf-kube-api-access-5xsj8\") pod \"storage-provisioner\" (UID: \"15af5481-be73-4e4b-8d93-f78926fa2edf\") " pod="kube-system/storage-provisioner"
Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605516 823 reconciler.go:159] "Reconciler: start to sync state"
Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.564013 823 kuberuntime_manager.go:905] container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5xsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod storage-provisioner_kube-system(15af5481-be73-4e4b-8d93-f78926fa2edf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.564052 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID=15af5481-be73-4e4b-8d93-f78926fa2edf
Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.776190 823 kuberuntime_manager.go:905] container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5xsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod storage-provisioner_kube-system(15af5481-be73-4e4b-8d93-f78926fa2edf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.776223 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID=15af5481-be73-4e4b-8d93-f78926fa2edf
Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.853641 823 kuberuntime_manager.go:905] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.24.4,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hnx4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-vdrfz_kube-system(42f3e5be-8516-465e-8d63-949a1de4a66d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.853676 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-vdrfz" podUID=42f3e5be-8516-465e-8d63-949a1de4a66d
Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.906178 823 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gtdq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-4zkrg_kube-system(9ba80daf-32d4-41a3-a1bd-7c8b3168a4db): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.906220 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-6d4b75cb6d-4zkrg" podUID=9ba80daf-32d4-41a3-a1bd-7c8b3168a4db
Mar 21 22:35:20 test-preload-778713 kubelet[823]: E0321 22:35:20.780711 823 kuberuntime_manager.go:905] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.24.4,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hnx4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-vdrfz_kube-system(42f3e5be-8516-465e-8d63-949a1de4a66d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 21 22:35:20 test-preload-778713 kubelet[823]: E0321 22:35:20.780839 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-vdrfz" podUID=42f3e5be-8516-465e-8d63-949a1de4a66d
Mar 21 22:35:20 test-preload-778713 kubelet[823]: E0321 22:35:20.782930 823 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gtdq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-4zkrg_kube-system(9ba80daf-32d4-41a3-a1bd-7c8b3168a4db): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 21 22:35:20 test-preload-778713 kubelet[823]: E0321 22:35:20.783166 823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-6d4b75cb6d-4zkrg" podUID=9ba80daf-32d4-41a3-a1bd-7c8b3168a4db
Mar 21 22:36:00 test-preload-778713 kubelet[823]: I0321 22:36:00.887239 823 scope.go:110] "RemoveContainer" containerID="2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380"
*
* ==> storage-provisioner [2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380] <==
* I0321 22:35:30.727052 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0321 22:36:00.732949 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
*
* ==> storage-provisioner [df1913d11540e787c62cdc0bbf163830e6e55ce53408e6dfd9a8e30ff343be7f] <==
* I0321 22:36:01.069970 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0321 22:36:01.103329 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0321 22:36:01.103417 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-778713 -n test-preload-778713
helpers_test.go:261: (dbg) Run: kubectl --context test-preload-778713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-778713" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-778713
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-778713: (1.171112539s)
--- FAIL: TestPreload (392.90s)