=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-872855 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E0128 19:02:01.332486 11004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/functional-402552/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-872855 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (1m20.643928551s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-872855 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-872855 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.213558013s)
preload_test.go:63: (dbg) Run: out/minikube-linux-amd64 stop -p test-preload-872855
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-872855: (7.110381075s)
preload_test.go:71: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-872855 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd
E0128 19:03:40.226413 11004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/addons-493500/client.crt: no such file or directory
E0128 19:04:53.908842 11004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/ingress-addon-legacy-859887/client.crt: no such file or directory
E0128 19:05:37.180640 11004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/addons-493500/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-872855 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd: (3m37.10839745s)
preload_test.go:80: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-872855 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got
-- stdout --
IMAGE                                        TAG                  IMAGE ID        SIZE
docker.io/kindest/kindnetd                   v20220726-ed811e41   d921cee849482   25.8MB
gcr.io/k8s-minikube/storage-provisioner      v5                   6e38f40d628db   9.06MB
k8s.gcr.io/coredns/coredns                   v1.8.6               a4ca41631cc7a   13.6MB
k8s.gcr.io/etcd                              3.5.3-0              aebe758cef4cd   102MB
k8s.gcr.io/kube-apiserver                    v1.24.4              6cab9d1bed1be   33.8MB
k8s.gcr.io/kube-controller-manager           v1.24.4              1f99cb6da9a82   31MB
k8s.gcr.io/kube-proxy                        v1.24.4              7a53d1e08ef58   39.5MB
k8s.gcr.io/kube-scheduler                    v1.24.4              03fa22539fc1c   15.5MB
k8s.gcr.io/pause                             3.7                  221177c6082a8   311kB
-- /stdout --
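The assertion that failed is the image-list check after the second start: busybox was pulled via crictl at 19:03, but after the stop/restart cycle (which re-downloaded and re-extracted the preload tarball) the containerd image store no longer lists it. A minimal sketch of that kind of check, using a hypothetical verifyImageListed helper rather than the actual preload_test.go source:

    package integration

    import (
        "os/exec"
        "strings"
        "testing"
    )

    // verifyImageListed fails the test when the expected image is missing from
    // `sudo crictl image ls` inside the given profile's VM.
    func verifyImageListed(t *testing.T, profile, image string) {
        t.Helper()
        cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile, "--", "sudo", "crictl", "image", "ls")
        out, err := cmd.CombinedOutput()
        if err != nil {
            t.Fatalf("crictl image ls failed: %v\n%s", err, out)
        }
        if !strings.Contains(string(out), image) {
            t.Fatalf("Expected to find %s in image list output, instead got\n%s", image, out)
        }
    }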
panic.go:522: *** TestPreload FAILED at 2023-01-28 19:06:49.484197793 +0000 UTC m=+2677.480339051
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-872855 -n test-preload-872855
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-872855 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-872855 logs -n 25: (1.058253144s)
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-489040 ssh -n | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:44 UTC | 28 Jan 23 18:44 UTC |
| | multinode-489040-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-489040 ssh -n multinode-489040 sudo cat | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:44 UTC | 28 Jan 23 18:44 UTC |
| | /home/docker/cp-test_multinode-489040-m03_multinode-489040.txt | | | | | |
| cp | multinode-489040 cp multinode-489040-m03:/home/docker/cp-test.txt | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:44 UTC | 28 Jan 23 18:44 UTC |
| | multinode-489040-m02:/home/docker/cp-test_multinode-489040-m03_multinode-489040-m02.txt | | | | | |
| ssh | multinode-489040 ssh -n | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:44 UTC | 28 Jan 23 18:44 UTC |
| | multinode-489040-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-489040 ssh -n multinode-489040-m02 sudo cat | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:44 UTC | 28 Jan 23 18:44 UTC |
| | /home/docker/cp-test_multinode-489040-m03_multinode-489040-m02.txt | | | | | |
| node | multinode-489040 node stop m03 | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:44 UTC | 28 Jan 23 18:44 UTC |
| node | multinode-489040 node start | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:44 UTC | 28 Jan 23 18:45 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-489040 | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:45 UTC | |
| stop | -p multinode-489040 | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:45 UTC | 28 Jan 23 18:48 UTC |
| start | -p multinode-489040 | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:48 UTC | 28 Jan 23 18:53 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-489040 | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:53 UTC | |
| node | multinode-489040 node delete | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:53 UTC | 28 Jan 23 18:53 UTC |
| | m03 | | | | | |
| stop | multinode-489040 stop | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:53 UTC | 28 Jan 23 18:56 UTC |
| start | -p multinode-489040 | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 18:56 UTC | 28 Jan 23 19:00 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-489040 | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 19:00 UTC | |
| start | -p multinode-489040-m02 | multinode-489040-m02 | jenkins | v1.29.0 | 28 Jan 23 19:00 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-489040-m03 | multinode-489040-m03 | jenkins | v1.29.0 | 28 Jan 23 19:00 UTC | 28 Jan 23 19:01 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-489040 | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 19:01 UTC | |
| delete | -p multinode-489040-m03 | multinode-489040-m03 | jenkins | v1.29.0 | 28 Jan 23 19:01 UTC | 28 Jan 23 19:01 UTC |
| delete | -p multinode-489040 | multinode-489040 | jenkins | v1.29.0 | 28 Jan 23 19:01 UTC | 28 Jan 23 19:01 UTC |
| start | -p test-preload-872855 | test-preload-872855 | jenkins | v1.29.0 | 28 Jan 23 19:01 UTC | 28 Jan 23 19:03 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-872855 | test-preload-872855 | jenkins | v1.29.0 | 28 Jan 23 19:03 UTC | 28 Jan 23 19:03 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| stop | -p test-preload-872855 | test-preload-872855 | jenkins | v1.29.0 | 28 Jan 23 19:03 UTC | 28 Jan 23 19:03 UTC |
| start | -p test-preload-872855 | test-preload-872855 | jenkins | v1.29.0 | 28 Jan 23 19:03 UTC | 28 Jan 23 19:06 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p test-preload-872855 -- sudo | test-preload-872855 | jenkins | v1.29.0 | 28 Jan 23 19:06 UTC | 28 Jan 23 19:06 UTC |
| | crictl image ls | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/28 19:03:12
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.19.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0128 19:03:12.193800 24158 out.go:296] Setting OutFile to fd 1 ...
I0128 19:03:12.193884 24158 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0128 19:03:12.193892 24158 out.go:309] Setting ErrFile to fd 2...
I0128 19:03:12.193896 24158 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0128 19:03:12.193991 24158 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3428/.minikube/bin
I0128 19:03:12.194481 24158 out.go:303] Setting JSON to false
I0128 19:03:12.195262 24158 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2740,"bootTime":1674929852,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0128 19:03:12.195315 24158 start.go:135] virtualization: kvm guest
I0128 19:03:12.197697 24158 out.go:177] * [test-preload-872855] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0128 19:03:12.199027 24158 notify.go:220] Checking for updates...
I0128 19:03:12.200320 24158 out.go:177] - MINIKUBE_LOCATION=15565
I0128 19:03:12.201651 24158 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0128 19:03:12.202940 24158 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15565-3428/kubeconfig
I0128 19:03:12.204343 24158 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3428/.minikube
I0128 19:03:12.205624 24158 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0128 19:03:12.206903 24158 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0128 19:03:12.208369 24158 config.go:180] Loaded profile config "test-preload-872855": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0128 19:03:12.208739 24158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0128 19:03:12.208800 24158 main.go:141] libmachine: Launching plugin server for driver kvm2
I0128 19:03:12.224208 24158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
I0128 19:03:12.224554 24158 main.go:141] libmachine: () Calling .GetVersion
I0128 19:03:12.224976 24158 main.go:141] libmachine: Using API Version 1
I0128 19:03:12.224999 24158 main.go:141] libmachine: () Calling .SetConfigRaw
I0128 19:03:12.225378 24158 main.go:141] libmachine: () Calling .GetMachineName
I0128 19:03:12.225546 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:03:12.227242 24158 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
I0128 19:03:12.228371 24158 driver.go:365] Setting default libvirt URI to qemu:///system
I0128 19:03:12.228707 24158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0128 19:03:12.228736 24158 main.go:141] libmachine: Launching plugin server for driver kvm2
I0128 19:03:12.242284 24158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34773
I0128 19:03:12.242625 24158 main.go:141] libmachine: () Calling .GetVersion
I0128 19:03:12.243021 24158 main.go:141] libmachine: Using API Version 1
I0128 19:03:12.243040 24158 main.go:141] libmachine: () Calling .SetConfigRaw
I0128 19:03:12.243284 24158 main.go:141] libmachine: () Calling .GetMachineName
I0128 19:03:12.243462 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:03:12.274707 24158 out.go:177] * Using the kvm2 driver based on existing profile
I0128 19:03:12.275901 24158 start.go:296] selected driver: kvm2
I0128 19:03:12.275915 24158 start.go:857] validating driver "kvm2" against &{Name:test-preload-872855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.29.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload
-872855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0128 19:03:12.276002 24158 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0128 19:03:12.276560 24158 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0128 19:03:12.276620 24158 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15565-3428/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0128 19:03:12.289773 24158 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0128 19:03:12.290024 24158 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0128 19:03:12.290053 24158 cni.go:84] Creating CNI manager for ""
I0128 19:03:12.290061 24158 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0128 19:03:12.290075 24158 start_flags.go:319] config:
{Name:test-preload-872855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.29.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-872855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0128 19:03:12.290183 24158 iso.go:125] acquiring lock: {Name:mkc56155f49acb8f05e1dda081aa2941f89df9ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0128 19:03:12.292514 24158 out.go:177] * Starting control plane node test-preload-872855 in cluster test-preload-872855
I0128 19:03:12.293637 24158 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0128 19:03:12.901662 24158 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0128 19:03:12.901701 24158 cache.go:57] Caching tarball of preloaded images
I0128 19:03:12.901839 24158 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0128 19:03:12.903695 24158 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
I0128 19:03:12.904994 24158 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0128 19:03:13.060588 24158 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/15565-3428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0128 19:03:30.523168 24158 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0128 19:03:30.523253 24158 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15565-3428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0128 19:03:31.383338 24158 cache.go:60] Finished verifying existence of preloaded tar for v1.24.4 on containerd
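The preload tarball above is downloaded with a ?checksum=md5:... hint and then re-verified locally before the cache entry is trusted. A minimal sketch of that verification step (a hypothetical verifyMD5 helper, not minikube's download package):

    package preload

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 recomputes the digest of the downloaded tarball and compares it
    // with the expected hex string taken from the ?checksum=md5:... parameter.
    func verifyMD5(path, wantHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
            return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, got, wantHex)
        }
        return nil
    }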
I0128 19:03:31.383458 24158 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/config.json ...
I0128 19:03:31.383668 24158 cache.go:193] Successfully downloaded all kic artifacts
I0128 19:03:31.383695 24158 start.go:364] acquiring machines lock for test-preload-872855: {Name:mk2a86355da0982c984360428a5c5ec0deabd8a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0128 19:03:31.383746 24158 start.go:368] acquired machines lock for "test-preload-872855" in 36.536µs
I0128 19:03:31.383761 24158 start.go:96] Skipping create...Using existing machine configuration
I0128 19:03:31.383766 24158 fix.go:55] fixHost starting:
I0128 19:03:31.384010 24158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0128 19:03:31.384047 24158 main.go:141] libmachine: Launching plugin server for driver kvm2
I0128 19:03:31.398528 24158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
I0128 19:03:31.398918 24158 main.go:141] libmachine: () Calling .GetVersion
I0128 19:03:31.399351 24158 main.go:141] libmachine: Using API Version 1
I0128 19:03:31.399374 24158 main.go:141] libmachine: () Calling .SetConfigRaw
I0128 19:03:31.399779 24158 main.go:141] libmachine: () Calling .GetMachineName
I0128 19:03:31.399943 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:03:31.400093 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetState
I0128 19:03:31.401633 24158 fix.go:103] recreateIfNeeded on test-preload-872855: state=Stopped err=<nil>
I0128 19:03:31.401651 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
W0128 19:03:31.401797 24158 fix.go:129] unexpected machine state, will restart: <nil>
I0128 19:03:31.403950 24158 out.go:177] * Restarting existing kvm2 VM for "test-preload-872855" ...
I0128 19:03:31.405347 24158 main.go:141] libmachine: (test-preload-872855) Calling .Start
I0128 19:03:31.405477 24158 main.go:141] libmachine: (test-preload-872855) Ensuring networks are active...
I0128 19:03:31.406126 24158 main.go:141] libmachine: (test-preload-872855) Ensuring network default is active
I0128 19:03:31.406423 24158 main.go:141] libmachine: (test-preload-872855) Ensuring network mk-test-preload-872855 is active
I0128 19:03:31.406707 24158 main.go:141] libmachine: (test-preload-872855) Getting domain xml...
I0128 19:03:31.407290 24158 main.go:141] libmachine: (test-preload-872855) Creating domain...
I0128 19:03:32.590654 24158 main.go:141] libmachine: (test-preload-872855) Waiting to get IP...
I0128 19:03:32.591424 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:32.591821 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:32.591866 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:32.591807 24197 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
I0128 19:03:32.856130 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:32.856482 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:32.856512 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:32.856438 24197 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
I0128 19:03:33.238943 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:33.239297 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:33.239357 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:33.239278 24197 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
I0128 19:03:33.663821 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:33.664241 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:33.664275 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:33.664175 24197 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
I0128 19:03:34.138370 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:34.138831 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:34.138862 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:34.138778 24197 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
I0128 19:03:34.727154 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:34.727469 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:34.727510 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:34.727415 24197 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
I0128 19:03:35.563320 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:35.563688 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:35.563710 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:35.563655 24197 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
I0128 19:03:36.311454 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:36.311746 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:36.311767 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:36.311712 24197 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
I0128 19:03:37.300105 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:37.300527 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:37.300552 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:37.300498 24197 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
I0128 19:03:38.491731 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:38.492174 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:38.492205 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:38.492134 24197 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
I0128 19:03:40.171610 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:40.172080 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:40.172111 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:40.172016 24197 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
I0128 19:03:42.519106 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:42.519549 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:42.519569 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:42.519501 24197 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
I0128 19:03:45.889857 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:45.890255 24158 main.go:141] libmachine: (test-preload-872855) DBG | unable to find current IP address of domain test-preload-872855 in network mk-test-preload-872855
I0128 19:03:45.890291 24158 main.go:141] libmachine: (test-preload-872855) DBG | I0128 19:03:45.890197 24197 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
I0128 19:03:49.010125 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.010513 24158 main.go:141] libmachine: (test-preload-872855) Found IP for machine: 192.168.39.121
I0128 19:03:49.010536 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has current primary IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.010550 24158 main.go:141] libmachine: (test-preload-872855) Reserving static IP address...
I0128 19:03:49.010829 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "test-preload-872855", mac: "52:54:00:dc:86:d7", ip: "192.168.39.121"} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:49.010851 24158 main.go:141] libmachine: (test-preload-872855) DBG | skip adding static IP to network mk-test-preload-872855 - found existing host DHCP lease matching {name: "test-preload-872855", mac: "52:54:00:dc:86:d7", ip: "192.168.39.121"}
I0128 19:03:49.010858 24158 main.go:141] libmachine: (test-preload-872855) Reserved static IP address: 192.168.39.121
I0128 19:03:49.010871 24158 main.go:141] libmachine: (test-preload-872855) Waiting for SSH to be available...
I0128 19:03:49.010880 24158 main.go:141] libmachine: (test-preload-872855) DBG | Getting to WaitForSSH function...
I0128 19:03:49.012996 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.013270 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:49.013308 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.013437 24158 main.go:141] libmachine: (test-preload-872855) DBG | Using SSH client type: external
I0128 19:03:49.013473 24158 main.go:141] libmachine: (test-preload-872855) DBG | Using SSH private key: /home/jenkins/minikube-integration/15565-3428/.minikube/machines/test-preload-872855/id_rsa (-rw-------)
I0128 19:03:49.013498 24158 main.go:141] libmachine: (test-preload-872855) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15565-3428/.minikube/machines/test-preload-872855/id_rsa -p 22] /usr/bin/ssh <nil>}
I0128 19:03:49.013517 24158 main.go:141] libmachine: (test-preload-872855) DBG | About to run SSH command:
I0128 19:03:49.013534 24158 main.go:141] libmachine: (test-preload-872855) DBG | exit 0
I0128 19:03:49.101185 24158 main.go:141] libmachine: (test-preload-872855) DBG | SSH cmd err, output: <nil>:
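The WaitForSSH block above shells out to the system ssh client with the options logged at 19:03:49.013498 and keeps running `exit 0` until the command succeeds. A rough equivalent (hypothetical helper, not the libmachine implementation):

    package sshwait

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH probes the VM by running `exit 0` over ssh until it succeeds
    // or the deadline passes.
    func waitForSSH(addr, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                "docker@"+addr, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not reachable within %s", addr, timeout)
    }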
I0128 19:03:49.101497 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetConfigRaw
I0128 19:03:49.102070 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetIP
I0128 19:03:49.104066 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.104352 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:49.104382 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.104622 24158 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/config.json ...
I0128 19:03:49.104788 24158 machine.go:88] provisioning docker machine ...
I0128 19:03:49.104805 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:03:49.104990 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetMachineName
I0128 19:03:49.105136 24158 buildroot.go:166] provisioning hostname "test-preload-872855"
I0128 19:03:49.105152 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetMachineName
I0128 19:03:49.105294 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHHostname
I0128 19:03:49.107096 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.107444 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:49.107479 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.107580 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHPort
I0128 19:03:49.107781 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:03:49.107908 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:03:49.108012 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHUsername
I0128 19:03:49.108160 24158 main.go:141] libmachine: Using SSH client type: native
I0128 19:03:49.108368 24158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 192.168.39.121 22 <nil> <nil>}
I0128 19:03:49.108389 24158 main.go:141] libmachine: About to run SSH command:
sudo hostname test-preload-872855 && echo "test-preload-872855" | sudo tee /etc/hostname
I0128 19:03:49.237292 24158 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-872855
I0128 19:03:49.237312 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHHostname
I0128 19:03:49.239610 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.239934 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:49.239963 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.240085 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHPort
I0128 19:03:49.240265 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:03:49.240441 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:03:49.240577 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHUsername
I0128 19:03:49.240742 24158 main.go:141] libmachine: Using SSH client type: native
I0128 19:03:49.240848 24158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 192.168.39.121 22 <nil> <nil>}
I0128 19:03:49.240869 24158 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-872855' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-872855/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-872855' | sudo tee -a /etc/hosts;
fi
fi
I0128 19:03:49.364682 24158 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0128 19:03:49.364707 24158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3428/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3428/.minikube}
I0128 19:03:49.364722 24158 buildroot.go:174] setting up certificates
I0128 19:03:49.364729 24158 provision.go:83] configureAuth start
I0128 19:03:49.364740 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetMachineName
I0128 19:03:49.364944 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetIP
I0128 19:03:49.367213 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.367504 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:49.367533 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.367654 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHHostname
I0128 19:03:49.369492 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.369812 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:49.369847 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.369958 24158 provision.go:138] copyHostCerts
I0128 19:03:49.370003 24158 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3428/.minikube/cert.pem, removing ...
I0128 19:03:49.370011 24158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3428/.minikube/cert.pem
I0128 19:03:49.370076 24158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3428/.minikube/cert.pem (1123 bytes)
I0128 19:03:49.370150 24158 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3428/.minikube/key.pem, removing ...
I0128 19:03:49.370176 24158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3428/.minikube/key.pem
I0128 19:03:49.370207 24158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3428/.minikube/key.pem (1679 bytes)
I0128 19:03:49.370257 24158 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3428/.minikube/ca.pem, removing ...
I0128 19:03:49.370264 24158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3428/.minikube/ca.pem
I0128 19:03:49.370287 24158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3428/.minikube/ca.pem (1082 bytes)
I0128 19:03:49.370369 24158 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3428/.minikube/certs/ca-key.pem org=jenkins.test-preload-872855 san=[192.168.39.121 192.168.39.121 localhost 127.0.0.1 minikube test-preload-872855]
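The server certificate generated above covers the VM IP, localhost, minikube, and the profile name, with the 26280h expiry from the profile config. An illustrative x509 template for that SAN set (a sketch only, not minikube's cert bootstrapper):

    package certs

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // serverCertTemplate mirrors the SAN list logged for jenkins.test-preload-872855.
    func serverCertTemplate(profile string, ip net.IP, expiry time.Duration) *x509.Certificate {
        return &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins." + profile}},
            IPAddresses:  []net.IP{ip, net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", profile},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(expiry),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
    }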
I0128 19:03:49.598209 24158 provision.go:172] copyRemoteCerts
I0128 19:03:49.598268 24158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0128 19:03:49.598298 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHHostname
I0128 19:03:49.600418 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.600694 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:49.600723 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.600815 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHPort
I0128 19:03:49.600990 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:03:49.601117 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHUsername
I0128 19:03:49.601221 24158 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3428/.minikube/machines/test-preload-872855/id_rsa Username:docker}
I0128 19:03:49.691307 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0128 19:03:49.712750 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0128 19:03:49.733577 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0128 19:03:49.754149 24158 provision.go:86] duration metric: configureAuth took 389.413072ms
I0128 19:03:49.754178 24158 buildroot.go:189] setting minikube options for container-runtime
I0128 19:03:49.754302 24158 config.go:180] Loaded profile config "test-preload-872855": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0128 19:03:49.754313 24158 machine.go:91] provisioned docker machine in 649.514402ms
I0128 19:03:49.754319 24158 start.go:300] post-start starting for "test-preload-872855" (driver="kvm2")
I0128 19:03:49.754324 24158 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0128 19:03:49.754344 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:03:49.754594 24158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0128 19:03:49.754646 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHHostname
I0128 19:03:49.756821 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.757111 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:49.757136 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.757310 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHPort
I0128 19:03:49.757472 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:03:49.757657 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHUsername
I0128 19:03:49.757774 24158 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3428/.minikube/machines/test-preload-872855/id_rsa Username:docker}
I0128 19:03:49.845766 24158 ssh_runner.go:195] Run: cat /etc/os-release
I0128 19:03:49.849369 24158 info.go:137] Remote host: Buildroot 2021.02.12
I0128 19:03:49.849387 24158 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3428/.minikube/addons for local assets ...
I0128 19:03:49.849431 24158 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3428/.minikube/files for local assets ...
I0128 19:03:49.849514 24158 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3428/.minikube/files/etc/ssl/certs/110042.pem -> 110042.pem in /etc/ssl/certs
I0128 19:03:49.849614 24158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0128 19:03:49.857067 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/files/etc/ssl/certs/110042.pem --> /etc/ssl/certs/110042.pem (1708 bytes)
I0128 19:03:49.878248 24158 start.go:303] post-start completed in 123.919962ms
I0128 19:03:49.878264 24158 fix.go:57] fixHost completed within 18.494497364s
I0128 19:03:49.878277 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHHostname
I0128 19:03:49.880478 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.880825 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:49.880852 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:49.880981 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHPort
I0128 19:03:49.881145 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:03:49.881303 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:03:49.881452 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHUsername
I0128 19:03:49.881588 24158 main.go:141] libmachine: Using SSH client type: native
I0128 19:03:49.881707 24158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 192.168.39.121 22 <nil> <nil>}
I0128 19:03:49.881717 24158 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0128 19:03:50.002120 24158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1674932629.951244368
I0128 19:03:50.002137 24158 fix.go:207] guest clock: 1674932629.951244368
I0128 19:03:50.002145 24158 fix.go:220] Guest: 2023-01-28 19:03:49.951244368 +0000 UTC Remote: 2023-01-28 19:03:49.878267244 +0000 UTC m=+37.745161104 (delta=72.977124ms)
I0128 19:03:50.002183 24158 fix.go:191] guest clock delta is within tolerance: 72.977124ms
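The guest-clock check above parses the VM's `date` output (seconds.nanoseconds since the epoch), diffs it against the host clock, and skips a resync because the 72.977124ms delta is within tolerance. A minimal sketch of that comparison (hypothetical helpers; the exact tolerance value is the caller's choice and is an assumption here):

    package clock

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1674932629.951244368" (seconds.nanoseconds) into a time.Time.
    // Assumes the fractional part has the full 9 digits that `date +%N` produces.
    func parseGuestClock(s string) (time.Time, error) {
        sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
        secs, err := strconv.ParseInt(sec, 10, 64)
        if err != nil {
            return time.Time{}, fmt.Errorf("bad guest clock %q: %v", s, err)
        }
        var nsecs int64
        if frac != "" {
            if nsecs, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, fmt.Errorf("bad guest clock %q: %v", s, err)
            }
        }
        return time.Unix(secs, nsecs), nil
    }

    // withinTolerance reports whether |guest-host| is small enough to skip a clock resync.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }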
I0128 19:03:50.002189 24158 start.go:83] releasing machines lock for "test-preload-872855", held for 18.618432808s
I0128 19:03:50.002213 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:03:50.002377 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetIP
I0128 19:03:50.004473 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:50.004757 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:50.004785 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:50.004905 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:03:50.005349 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:03:50.005509 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:03:50.005589 24158 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0128 19:03:50.005613 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHHostname
I0128 19:03:50.005695 24158 ssh_runner.go:195] Run: cat /version.json
I0128 19:03:50.005711 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHHostname
I0128 19:03:50.008074 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:50.008242 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:50.008370 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:50.008392 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:50.008510 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHPort
I0128 19:03:50.008647 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:03:50.008658 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:03:50.008673 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:03:50.008813 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHUsername
I0128 19:03:50.008814 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHPort
I0128 19:03:50.008974 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:03:50.008983 24158 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3428/.minikube/machines/test-preload-872855/id_rsa Username:docker}
I0128 19:03:50.009111 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHUsername
I0128 19:03:50.009220 24158 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3428/.minikube/machines/test-preload-872855/id_rsa Username:docker}
I0128 19:03:50.098124 24158 ssh_runner.go:195] Run: systemctl --version
I0128 19:03:50.229921 24158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0128 19:03:50.235280 24158 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0128 19:03:50.235333 24158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0128 19:03:50.253112 24158 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0128 19:03:50.253127 24158 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0128 19:03:50.253200 24158 ssh_runner.go:195] Run: sudo crictl images --output json
I0128 19:03:54.285898 24158 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.032665998s)
I0128 19:03:54.286025 24158 containerd.go:604] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
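The "couldn't find preloaded image" decision above comes from parsing `sudo crictl images --output json` and looking for the expected kube-apiserver tag. A minimal sketch of that lookup (the JSON field names follow the CRI ListImagesResponse shape and are an assumption, not a guaranteed crictl contract):

    package images

    import "encoding/json"

    type criImage struct {
        RepoTags []string `json:"repoTags"`
    }

    type criImageList struct {
        Images []criImage `json:"images"`
    }

    // hasImage reports whether the crictl JSON output lists the wanted repo tag,
    // e.g. "k8s.gcr.io/kube-apiserver:v1.24.4".
    func hasImage(raw []byte, wantTag string) (bool, error) {
        var list criImageList
        if err := json.Unmarshal(raw, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if tag == wantTag {
                    return true, nil
                }
            }
        }
        return false, nil
    }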
I0128 19:03:54.286078 24158 ssh_runner.go:195] Run: which lz4
I0128 19:03:54.290535 24158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0128 19:03:54.294992 24158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0128 19:03:54.295011 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
I0128 19:03:56.099857 24158 containerd.go:551] Took 1.809356 seconds to copy over tarball
I0128 19:03:56.099924 24158 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0128 19:03:59.260101 24158 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.1601526s)
I0128 19:03:59.260125 24158 containerd.go:558] Took 3.160245 seconds to extract the tarball
I0128 19:03:59.260134 24158 ssh_runner.go:146] rm: /preloaded.tar.lz4
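(The four steps above — stat the target, copy the preload tarball over, extract it with tar -I lz4 into /var, then delete it — can be reproduced with a thin wrapper around the ssh/scp binaries. A hedged sketch, assuming those binaries are on PATH and using only the paths shown in the log; the run helper is made up, this is not minikube's ssh_runner, and permissions on a real guest may require a different destination.)

package main

import (
	"fmt"
	"os/exec"
)

const (
	host    = "docker@192.168.39.121"
	keyPath = "/home/jenkins/minikube-integration/15565-3428/.minikube/machines/test-preload-872855/id_rsa"
	tarball = "/home/jenkins/minikube-integration/15565-3428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4"
)

// run executes a local command and folds its combined output into the error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	// Only copy when the guest does not already have the tarball (mirrors the stat check above).
	if err := run("ssh", "-i", keyPath, host, "stat /preloaded.tar.lz4"); err != nil {
		if err := run("scp", "-i", keyPath, tarball, host+":/preloaded.tar.lz4"); err != nil {
			panic(err)
		}
	}
	// Extract into /var and clean up, as containerd.go does above.
	if err := run("ssh", "-i", keyPath, host,
		"sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4"); err != nil {
		panic(err)
	}
}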
I0128 19:03:59.300335 24158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 19:03:59.395931 24158 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0128 19:03:59.413581 24158 start.go:483] detecting cgroup driver to use...
I0128 19:03:59.413670 24158 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0128 19:04:02.070684 24158 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.656989445s)
I0128 19:04:02.070755 24158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0128 19:04:02.082535 24158 docker.go:186] disabling cri-docker service (if available) ...
I0128 19:04:02.082584 24158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0128 19:04:02.094021 24158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0128 19:04:02.105287 24158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0128 19:04:02.199938 24158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0128 19:04:02.295035 24158 docker.go:202] disabling docker service ...
I0128 19:04:02.295096 24158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0128 19:04:02.307479 24158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0128 19:04:02.318933 24158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0128 19:04:02.418056 24158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0128 19:04:02.521370 24158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0128 19:04:02.533598 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 19:04:02.551116 24158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
I0128 19:04:02.559695 24158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0128 19:04:02.568210 24158 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0128 19:04:02.568239 24158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0128 19:04:02.576894 24158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 19:04:02.585271 24158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0128 19:04:02.594124 24158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 19:04:02.602813 24158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0128 19:04:02.611631 24158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
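(Each of the sed -i -r edits above is a line-anchored regex replace over /etc/containerd/config.toml. As a sketch, the SystemdCgroup edit could equally be done in Go; assumptions: it runs as root on the guest, and only the file path and regex from the sed call above are used.)

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}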
I0128 19:04:02.620304 24158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0128 19:04:02.627966 24158 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0128 19:04:02.628007 24158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0128 19:04:02.639825 24158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
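(The netfilter dance above follows a simple rule: if /proc/sys/net/bridge/bridge-nf-call-iptables is missing, the br_netfilter module is not loaded, so load it and then enable IPv4 forwarding. A sketch of the same check, assuming it runs as root on the guest.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirrors: sudo sysctl net.bridge.bridge-nf-call-iptables (status 255 above means the path is absent).
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge netfilter not available yet, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("modprobe br_netfilter: %v\n%s", err, out))
		}
	}
	// Mirrors: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
}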
I0128 19:04:02.648202 24158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 19:04:02.743056 24158 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0128 19:04:02.765410 24158 start.go:530] Will wait 60s for socket path /run/containerd/containerd.sock
I0128 19:04:02.765446 24158 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0128 19:04:02.770662 24158 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0128 19:04:03.875893 24158 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
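(The retry above is a plain wait-for-path loop: stat the containerd socket, back off, stat again, and give up once the 60s budget announced at start.go:530 is spent. A sketch of that pattern; waitForPath is a made-up helper, not minikube's retry package.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the deadline passes.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is ready")
}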
I0128 19:04:03.881327 24158 start.go:551] Will wait 60s for crictl version
I0128 19:04:03.881373 24158 ssh_runner.go:195] Run: which crictl
I0128 19:04:03.885059 24158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0128 19:04:03.913406 24158 start.go:567] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.15
RuntimeApiVersion: v1alpha2
I0128 19:04:03.913471 24158 ssh_runner.go:195] Run: containerd --version
I0128 19:04:03.941920 24158 ssh_runner.go:195] Run: containerd --version
I0128 19:04:03.968323 24158 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.15 ...
I0128 19:04:03.969483 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetIP
I0128 19:04:03.971922 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:04:03.972249 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:04:03.972280 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:04:03.972449 24158 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0128 19:04:03.976203 24158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0128 19:04:03.988787 24158 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0128 19:04:03.988838 24158 ssh_runner.go:195] Run: sudo crictl images --output json
I0128 19:04:04.017156 24158 containerd.go:608] all images are preloaded for containerd runtime.
I0128 19:04:04.017176 24158 containerd.go:522] Images already preloaded, skipping extraction
I0128 19:04:04.017219 24158 ssh_runner.go:195] Run: sudo crictl images --output json
I0128 19:04:04.042294 24158 containerd.go:608] all images are preloaded for containerd runtime.
I0128 19:04:04.042309 24158 cache_images.go:84] Images are preloaded, skipping loading
I0128 19:04:04.042338 24158 ssh_runner.go:195] Run: sudo crictl info
I0128 19:04:04.070742 24158 cni.go:84] Creating CNI manager for ""
I0128 19:04:04.070761 24158 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0128 19:04:04.070771 24158 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0128 19:04:04.070783 24158 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-872855 NodeName:test-preload-872855 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0128 19:04:04.070891 24158 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.121
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "test-preload-872855"
  kubeletExtraArgs:
    node-ip: 192.168.39.121
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0128 19:04:04.070966 24158 kubeadm.go:968] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-872855 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
[Install]
config:
{KubernetesVersion:v1.24.4 ClusterName:test-preload-872855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0128 19:04:04.070999 24158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
I0128 19:04:04.078938 24158 binaries.go:44] Found k8s binaries, skipping transfer
I0128 19:04:04.078986 24158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0128 19:04:04.086611 24158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (484 bytes)
I0128 19:04:04.101693 24158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0128 19:04:04.116460 24158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
I0128 19:04:04.131433 24158 ssh_runner.go:195] Run: grep 192.168.39.121 control-plane.minikube.internal$ /etc/hosts
I0128 19:04:04.134905 24158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.121 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0128 19:04:04.146465 24158 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855 for IP: 192.168.39.121
I0128 19:04:04.146486 24158 certs.go:186] acquiring lock for shared ca certs: {Name:mka5ca6f05c65138c104880d6f1130f86b1c80f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 19:04:04.146590 24158 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3428/.minikube/ca.key
I0128 19:04:04.146625 24158 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3428/.minikube/proxy-client-ca.key
I0128 19:04:04.146684 24158 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/client.key
I0128 19:04:04.146739 24158 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/apiserver.key.3839b38a
I0128 19:04:04.146771 24158 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/proxy-client.key
I0128 19:04:04.146850 24158 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3428/.minikube/certs/home/jenkins/minikube-integration/15565-3428/.minikube/certs/11004.pem (1338 bytes)
W0128 19:04:04.146874 24158 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3428/.minikube/certs/home/jenkins/minikube-integration/15565-3428/.minikube/certs/11004_empty.pem, impossibly tiny 0 bytes
I0128 19:04:04.146888 24158 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3428/.minikube/certs/home/jenkins/minikube-integration/15565-3428/.minikube/certs/ca-key.pem (1679 bytes)
I0128 19:04:04.146915 24158 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3428/.minikube/certs/home/jenkins/minikube-integration/15565-3428/.minikube/certs/ca.pem (1082 bytes)
I0128 19:04:04.146935 24158 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3428/.minikube/certs/home/jenkins/minikube-integration/15565-3428/.minikube/certs/cert.pem (1123 bytes)
I0128 19:04:04.146961 24158 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3428/.minikube/certs/home/jenkins/minikube-integration/15565-3428/.minikube/certs/key.pem (1679 bytes)
I0128 19:04:04.147001 24158 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3428/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3428/.minikube/files/etc/ssl/certs/110042.pem (1708 bytes)
I0128 19:04:04.147595 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0128 19:04:04.169786 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0128 19:04:04.191660 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0128 19:04:04.213322 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0128 19:04:04.235047 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0128 19:04:04.256696 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0128 19:04:04.278175 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0128 19:04:04.299663 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0128 19:04:04.321410 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0128 19:04:04.342933 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/certs/11004.pem --> /usr/share/ca-certificates/11004.pem (1338 bytes)
I0128 19:04:04.364418 24158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3428/.minikube/files/etc/ssl/certs/110042.pem --> /usr/share/ca-certificates/110042.pem (1708 bytes)
I0128 19:04:04.386236 24158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0128 19:04:04.401385 24158 ssh_runner.go:195] Run: openssl version
I0128 19:04:04.406801 24158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0128 19:04:04.415948 24158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0128 19:04:04.420244 24158 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:23 /usr/share/ca-certificates/minikubeCA.pem
I0128 19:04:04.420296 24158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0128 19:04:04.425738 24158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0128 19:04:04.434391 24158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11004.pem && ln -fs /usr/share/ca-certificates/11004.pem /etc/ssl/certs/11004.pem"
I0128 19:04:04.443095 24158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11004.pem
I0128 19:04:04.447400 24158 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:29 /usr/share/ca-certificates/11004.pem
I0128 19:04:04.447426 24158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11004.pem
I0128 19:04:04.452819 24158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11004.pem /etc/ssl/certs/51391683.0"
I0128 19:04:04.461908 24158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110042.pem && ln -fs /usr/share/ca-certificates/110042.pem /etc/ssl/certs/110042.pem"
I0128 19:04:04.470977 24158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110042.pem
I0128 19:04:04.475311 24158 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:29 /usr/share/ca-certificates/110042.pem
I0128 19:04:04.475345 24158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110042.pem
I0128 19:04:04.480760 24158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110042.pem /etc/ssl/certs/3ec20f2e.0"
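(The openssl/ln sequence above installs each CA under /etc/ssl/certs using its OpenSSL subject hash as the link name: b5213941.0, 51391683.0 and 3ec20f2e.0. A simplified sketch that shells out to openssl the same way; installCert is a made-up helper, and it links the /usr/share copies directly rather than reproducing minikube's two-step linking.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert links certPath into /etc/ssl/certs under the <openssl-subject-hash>.0 name.
func installCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/11004.pem",
		"/usr/share/ca-certificates/110042.pem",
	} {
		if err := installCert(c); err != nil {
			panic(err)
		}
	}
}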
I0128 19:04:04.489456 24158 kubeadm.go:401] StartCluster: {Name:test-preload-872855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.29.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-872855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0128 19:04:04.489526 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0128 19:04:04.489563 24158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0128 19:04:04.520435 24158 cri.go:87] found id: ""
I0128 19:04:04.520481 24158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0128 19:04:04.528252 24158 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0128 19:04:04.528264 24158 kubeadm.go:633] restartCluster start
I0128 19:04:04.528291 24158 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0128 19:04:04.536015 24158 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0128 19:04:04.536473 24158 kubeconfig.go:135] verify returned: extract IP: "test-preload-872855" does not appear in /home/jenkins/minikube-integration/15565-3428/kubeconfig
I0128 19:04:04.536597 24158 kubeconfig.go:146] "test-preload-872855" context is missing from /home/jenkins/minikube-integration/15565-3428/kubeconfig - will repair!
I0128 19:04:04.536960 24158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3428/kubeconfig: {Name:mk68c0cdc51b4c3db12c527d0a0aac1339ffa973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 19:04:04.537536 24158 kapi.go:59] client config for test-preload-872855: &rest.Config{Host:"https://192.168.39.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0128 19:04:04.538145 24158 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0128 19:04:04.545828 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:04.545875 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:04.555660 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:05.056368 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:05.056432 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:05.067025 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:05.556646 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:05.556739 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:05.567225 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:06.055755 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:06.055834 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:06.066431 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:06.556407 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:06.556457 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:06.567353 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:07.055904 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:07.055973 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:07.066396 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:07.556770 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:07.556858 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:07.567589 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:08.056076 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:08.056131 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:08.066533 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:08.556072 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:08.556139 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:08.566630 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:09.056142 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:09.056226 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:09.066406 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:09.555943 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:09.556021 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:09.567536 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:10.056052 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:10.056109 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:10.066580 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:10.556121 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:10.556201 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:10.566682 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:11.056347 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:11.056413 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:11.066843 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:11.555729 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:11.555795 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:11.566564 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:12.056073 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:12.056158 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:12.066874 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:12.556536 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:12.556608 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:12.567463 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:13.056158 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:13.056215 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:13.066822 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:13.556425 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:13.556498 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:13.567018 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:14.056638 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:14.056717 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:14.066843 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:14.556567 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:14.556621 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:14.567321 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:14.567338 24158 api_server.go:165] Checking apiserver status ...
I0128 19:04:14.567392 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0128 19:04:14.576769 24158 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0128 19:04:14.576790 24158 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
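(The run of "Checking apiserver status ..." lines above is a bounded poll: run the pgrep check roughly every 500ms until it succeeds or the deadline passes, then report "timed out waiting for the condition". A sketch of that loop shape; pollUntil is a made-up name, and the interval and check command are taken from the log.)

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

var errTimedOut = errors.New("timed out waiting for the condition")

// pollUntil runs check every interval until it succeeds or ctx expires.
func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return errTimedOut
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	err := pollUntil(ctx, 500*time.Millisecond, func() error {
		// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.* (run here without sudo for simplicity)
		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	})
	if err != nil {
		fmt.Println("apiserver process never appeared:", err)
	}
}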
I0128 19:04:14.576795 24158 kubeadm.go:1120] stopping kube-system containers ...
I0128 19:04:14.576804 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0128 19:04:14.576843 24158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0128 19:04:14.608112 24158 cri.go:87] found id: ""
I0128 19:04:14.608164 24158 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0128 19:04:14.621697 24158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0128 19:04:14.629677 24158 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0128 19:04:14.629722 24158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0128 19:04:14.637674 24158 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0128 19:04:14.637686 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0128 19:04:14.730578 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0128 19:04:15.158725 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0128 19:04:15.479284 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0128 19:04:15.539378 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0128 19:04:15.611899 24158 api_server.go:51] waiting for apiserver process to appear ...
I0128 19:04:15.611963 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0128 19:04:16.125917 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0128 19:04:16.625882 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0128 19:04:17.126055 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0128 19:04:17.137734 24158 api_server.go:71] duration metric: took 1.525836821s to wait for apiserver process to appear ...
I0128 19:04:17.137761 24158 api_server.go:87] waiting for apiserver healthz status ...
I0128 19:04:17.137772 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:22.138341 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0128 19:04:22.638877 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:27.639383 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0128 19:04:28.138555 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:33.139772 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0128 19:04:33.639489 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:37.247119 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": read tcp 192.168.39.1:48332->192.168.39.121:8443: read: connection reset by peer
I0128 19:04:37.639075 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:37.639661 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:38.139253 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:38.139803 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:38.639416 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:38.639991 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:39.139274 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:39.139839 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:39.639298 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:39.639887 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:40.139290 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:40.139874 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:40.639488 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:40.640202 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:41.139323 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:41.139916 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:41.638886 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:41.639418 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:42.138826 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:42.139341 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:42.639018 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:42.639527 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:43.139367 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:43.139916 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:43.638490 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:43.639071 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:44.139310 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:44.139903 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:44.639466 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:44.640039 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:45.138565 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:45.139100 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:45.638632 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:45.639148 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:46.138707 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:46.139225 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:46.639213 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:46.639820 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:47.139415 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:47.139982 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:47.639496 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:47.640024 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:48.138542 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:48.139103 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:48.638672 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:48.639173 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:49.138739 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:49.139242 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:49.638797 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:49.639356 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:50.138891 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:50.139404 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:50.638960 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:50.639481 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:51.139132 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:51.139711 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:51.638573 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:51.639087 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:52.138617 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:52.139164 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:52.638540 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:52.639077 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:53.138644 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:53.139189 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:53.638735 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:53.639248 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:54.138796 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:54.139315 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:54.639070 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:54.639621 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:55.139239 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:55.139768 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:55.638459 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:55.639035 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:56.138553 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:56.139122 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:56.638990 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:56.639512 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:57.139124 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:57.139655 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:57.639060 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:57.639648 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:58.139272 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:58.139910 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:58.638465 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:58.639030 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:59.138539 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:59.139116 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:04:59.638668 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:04:59.639264 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:00.138792 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:00.139382 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:00.638918 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:00.639490 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:01.139115 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:01.139696 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:01.638682 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:01.639250 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:02.138806 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:02.139322 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:02.638969 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:02.639607 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:03.139232 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:03.139819 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:03.639480 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:03.640065 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:04.138576 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:04.139154 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:04.639125 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:04.639665 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:05.139285 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:05.139863 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:05.639452 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:05.640010 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:06.138692 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:06.139299 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:06.639243 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:06.639821 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:07.139466 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:07.139991 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:07.639445 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:07.640003 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:08.138559 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:08.139098 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:08.638687 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:08.639404 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:09.138962 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:09.139459 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:09.639127 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:09.639616 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:10.139296 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:10.139824 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:10.639504 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:10.640039 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:11.138604 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:11.139129 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:11.639160 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:11.639714 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:12.139349 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:12.139860 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:12.639347 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:12.639894 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:13.138465 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:13.139005 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:13.638566 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:13.639060 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:14.139296 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:14.139849 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:14.639487 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:14.639985 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:15.138527 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:15.139085 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:15.638603 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:15.639143 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:16.139315 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:16.139849 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:16.638809 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:16.639398 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
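The long run of "connection refused" lines above is minikube polling the apiserver's /healthz endpoint roughly twice per second until the restarted control plane starts serving again. A minimal sketch of that kind of probe loop is below; this is hypothetical illustration, not minikube's actual api_server.go, and the URL, interval and overall timeout are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an HTTPS /healthz endpoint until it answers 200 or the
// deadline passes. TLS verification is skipped because the probe only cares
// whether the server is up, mirroring an unauthenticated health check.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Typically "connection refused" while the apiserver container restarts.
			fmt.Printf("stopped: %v\n", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // healthz returned 200: control plane is serving
		}
		// e.g. 403 for system:anonymous: the server is up but the probe is unauthenticated.
		fmt.Printf("healthz returned %d\n", resp.StatusCode)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.121:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The later transition in the log, from "connection refused" to a 403 for system:anonymous and finally to "returned 200: ok", follows exactly this progression.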
I0128 19:05:17.139019 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0128 19:05:17.139108 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0128 19:05:17.167940 24158 cri.go:87] found id: "b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1"
I0128 19:05:17.167967 24158 cri.go:87] found id: ""
I0128 19:05:17.167976 24158 logs.go:279] 1 containers: [b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1]
I0128 19:05:17.168032 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:17.172268 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0128 19:05:17.172312 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0128 19:05:17.198328 24158 cri.go:87] found id: "e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8"
I0128 19:05:17.198350 24158 cri.go:87] found id: ""
I0128 19:05:17.198364 24158 logs.go:279] 1 containers: [e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8]
I0128 19:05:17.198400 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:17.202649 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0128 19:05:17.202687 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0128 19:05:17.229936 24158 cri.go:87] found id: ""
I0128 19:05:17.229950 24158 logs.go:279] 0 containers: []
W0128 19:05:17.229955 24158 logs.go:281] No container was found matching "coredns"
I0128 19:05:17.229960 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0128 19:05:17.229999 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0128 19:05:17.253694 24158 cri.go:87] found id: "b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17"
I0128 19:05:17.253720 24158 cri.go:87] found id: ""
I0128 19:05:17.253727 24158 logs.go:279] 1 containers: [b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17]
I0128 19:05:17.253758 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:17.257238 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0128 19:05:17.257289 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0128 19:05:17.282203 24158 cri.go:87] found id: ""
I0128 19:05:17.282220 24158 logs.go:279] 0 containers: []
W0128 19:05:17.282227 24158 logs.go:281] No container was found matching "kube-proxy"
I0128 19:05:17.282233 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0128 19:05:17.282267 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0128 19:05:17.307596 24158 cri.go:87] found id: ""
I0128 19:05:17.307615 24158 logs.go:279] 0 containers: []
W0128 19:05:17.307622 24158 logs.go:281] No container was found matching "kubernetes-dashboard"
I0128 19:05:17.307628 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0128 19:05:17.307670 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0128 19:05:17.334199 24158 cri.go:87] found id: ""
I0128 19:05:17.334217 24158 logs.go:279] 0 containers: []
W0128 19:05:17.334225 24158 logs.go:281] No container was found matching "storage-provisioner"
I0128 19:05:17.334231 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0128 19:05:17.334269 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0128 19:05:17.362264 24158 cri.go:87] found id: "74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732"
I0128 19:05:17.362286 24158 cri.go:87] found id: "08c1c5edcf22e3929f48bc99dea115b4e3cf056eb781933b71170e952a655a39"
I0128 19:05:17.362296 24158 cri.go:87] found id: ""
I0128 19:05:17.362306 24158 logs.go:279] 2 containers: [74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732 08c1c5edcf22e3929f48bc99dea115b4e3cf056eb781933b71170e952a655a39]
I0128 19:05:17.362344 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:17.365911 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:17.369375 24158 logs.go:124] Gathering logs for dmesg ...
I0128 19:05:17.369393 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0128 19:05:17.381375 24158 logs.go:124] Gathering logs for etcd [e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8] ...
I0128 19:05:17.381390 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8"
I0128 19:05:17.413331 24158 logs.go:124] Gathering logs for kube-scheduler [b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17] ...
I0128 19:05:17.413351 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17"
I0128 19:05:17.467313 24158 logs.go:124] Gathering logs for kube-controller-manager [08c1c5edcf22e3929f48bc99dea115b4e3cf056eb781933b71170e952a655a39] ...
I0128 19:05:17.467335 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08c1c5edcf22e3929f48bc99dea115b4e3cf056eb781933b71170e952a655a39"
I0128 19:05:17.503493 24158 logs.go:124] Gathering logs for kubelet ...
I0128 19:05:17.503515 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0128 19:05:17.568958 24158 logs.go:124] Gathering logs for kube-apiserver [b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1] ...
I0128 19:05:17.568986 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1"
I0128 19:05:17.598434 24158 logs.go:124] Gathering logs for kube-controller-manager [74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732] ...
I0128 19:05:17.598457 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732"
I0128 19:05:17.627470 24158 logs.go:124] Gathering logs for containerd ...
I0128 19:05:17.627494 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0128 19:05:17.667957 24158 logs.go:124] Gathering logs for container status ...
I0128 19:05:17.667982 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0128 19:05:17.700702 24158 logs.go:124] Gathering logs for describe nodes ...
I0128 19:05:17.700732 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0128 19:05:17.797926 24158 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0128 19:05:20.298061 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:20.298599 24158 api_server.go:268] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:20.639056 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0128 19:05:20.639124 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0128 19:05:20.665719 24158 cri.go:87] found id: "b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1"
I0128 19:05:20.665744 24158 cri.go:87] found id: ""
I0128 19:05:20.665752 24158 logs.go:279] 1 containers: [b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1]
I0128 19:05:20.665809 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:20.669683 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0128 19:05:20.669736 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0128 19:05:20.696484 24158 cri.go:87] found id: "e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8"
I0128 19:05:20.696503 24158 cri.go:87] found id: ""
I0128 19:05:20.696509 24158 logs.go:279] 1 containers: [e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8]
I0128 19:05:20.696543 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:20.700078 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0128 19:05:20.700112 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0128 19:05:20.728363 24158 cri.go:87] found id: ""
I0128 19:05:20.728380 24158 logs.go:279] 0 containers: []
W0128 19:05:20.728385 24158 logs.go:281] No container was found matching "coredns"
I0128 19:05:20.728390 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0128 19:05:20.728418 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0128 19:05:20.753735 24158 cri.go:87] found id: "b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17"
I0128 19:05:20.753756 24158 cri.go:87] found id: ""
I0128 19:05:20.753763 24158 logs.go:279] 1 containers: [b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17]
I0128 19:05:20.753808 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:20.757868 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0128 19:05:20.757927 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0128 19:05:20.783124 24158 cri.go:87] found id: ""
I0128 19:05:20.783143 24158 logs.go:279] 0 containers: []
W0128 19:05:20.783148 24158 logs.go:281] No container was found matching "kube-proxy"
I0128 19:05:20.783153 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0128 19:05:20.783198 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0128 19:05:20.808775 24158 cri.go:87] found id: ""
I0128 19:05:20.808792 24158 logs.go:279] 0 containers: []
W0128 19:05:20.808799 24158 logs.go:281] No container was found matching "kubernetes-dashboard"
I0128 19:05:20.808805 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0128 19:05:20.808839 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0128 19:05:20.851326 24158 cri.go:87] found id: ""
I0128 19:05:20.851343 24158 logs.go:279] 0 containers: []
W0128 19:05:20.851348 24158 logs.go:281] No container was found matching "storage-provisioner"
I0128 19:05:20.851353 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0128 19:05:20.851389 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0128 19:05:20.880128 24158 cri.go:87] found id: "74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732"
I0128 19:05:20.880151 24158 cri.go:87] found id: ""
I0128 19:05:20.880159 24158 logs.go:279] 1 containers: [74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732]
I0128 19:05:20.880194 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:20.884395 24158 logs.go:124] Gathering logs for container status ...
I0128 19:05:20.884412 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0128 19:05:20.924421 24158 logs.go:124] Gathering logs for dmesg ...
I0128 19:05:20.924441 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0128 19:05:20.937859 24158 logs.go:124] Gathering logs for kube-scheduler [b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17] ...
I0128 19:05:20.937879 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17"
I0128 19:05:21.000760 24158 logs.go:124] Gathering logs for kube-controller-manager [74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732] ...
I0128 19:05:21.000780 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732"
I0128 19:05:21.039627 24158 logs.go:124] Gathering logs for etcd [e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8] ...
I0128 19:05:21.039644 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8"
I0128 19:05:21.075591 24158 logs.go:124] Gathering logs for containerd ...
I0128 19:05:21.075609 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0128 19:05:21.113640 24158 logs.go:124] Gathering logs for kubelet ...
I0128 19:05:21.113657 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0128 19:05:21.180187 24158 logs.go:124] Gathering logs for describe nodes ...
I0128 19:05:21.180208 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0128 19:05:21.236417 24158 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0128 19:05:21.236435 24158 logs.go:124] Gathering logs for kube-apiserver [b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1] ...
I0128 19:05:21.236444 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1"
I0128 19:05:23.771839 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:24.674745 24158 api_server.go:278] https://192.168.39.121:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0128 19:05:24.674768 24158 api_server.go:102] status: https://192.168.39.121:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0128 19:05:25.139420 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0128 19:05:25.139493 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0128 19:05:25.168419 24158 cri.go:87] found id: "d45599c40f42c9f2018ecc1649778c95692d3f502d488a44d55a1c94d7328826"
I0128 19:05:25.168448 24158 cri.go:87] found id: "b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1"
I0128 19:05:25.168476 24158 cri.go:87] found id: ""
I0128 19:05:25.168485 24158 logs.go:279] 2 containers: [d45599c40f42c9f2018ecc1649778c95692d3f502d488a44d55a1c94d7328826 b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1]
I0128 19:05:25.168543 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:25.172478 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:25.175812 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0128 19:05:25.175855 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0128 19:05:25.201641 24158 cri.go:87] found id: "e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8"
I0128 19:05:25.201669 24158 cri.go:87] found id: ""
I0128 19:05:25.201676 24158 logs.go:279] 1 containers: [e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8]
I0128 19:05:25.201714 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:25.205585 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0128 19:05:25.205628 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0128 19:05:25.231104 24158 cri.go:87] found id: ""
I0128 19:05:25.231121 24158 logs.go:279] 0 containers: []
W0128 19:05:25.231129 24158 logs.go:281] No container was found matching "coredns"
I0128 19:05:25.231135 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0128 19:05:25.231180 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0128 19:05:25.264739 24158 cri.go:87] found id: "b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17"
I0128 19:05:25.264761 24158 cri.go:87] found id: ""
I0128 19:05:25.264768 24158 logs.go:279] 1 containers: [b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17]
I0128 19:05:25.264805 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:25.268287 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0128 19:05:25.268329 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0128 19:05:25.303195 24158 cri.go:87] found id: ""
I0128 19:05:25.303208 24158 logs.go:279] 0 containers: []
W0128 19:05:25.303214 24158 logs.go:281] No container was found matching "kube-proxy"
I0128 19:05:25.303218 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0128 19:05:25.303253 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0128 19:05:25.329165 24158 cri.go:87] found id: ""
I0128 19:05:25.329181 24158 logs.go:279] 0 containers: []
W0128 19:05:25.329187 24158 logs.go:281] No container was found matching "kubernetes-dashboard"
I0128 19:05:25.329192 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0128 19:05:25.329238 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0128 19:05:25.359269 24158 cri.go:87] found id: ""
I0128 19:05:25.359282 24158 logs.go:279] 0 containers: []
W0128 19:05:25.359287 24158 logs.go:281] No container was found matching "storage-provisioner"
I0128 19:05:25.359294 24158 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0128 19:05:25.359331 24158 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0128 19:05:25.385816 24158 cri.go:87] found id: "74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732"
I0128 19:05:25.385838 24158 cri.go:87] found id: ""
I0128 19:05:25.385845 24158 logs.go:279] 1 containers: [74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732]
I0128 19:05:25.385893 24158 ssh_runner.go:195] Run: which crictl
I0128 19:05:25.389445 24158 logs.go:124] Gathering logs for dmesg ...
I0128 19:05:25.389470 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0128 19:05:25.400447 24158 logs.go:124] Gathering logs for kube-apiserver [d45599c40f42c9f2018ecc1649778c95692d3f502d488a44d55a1c94d7328826] ...
I0128 19:05:25.400465 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45599c40f42c9f2018ecc1649778c95692d3f502d488a44d55a1c94d7328826"
I0128 19:05:25.438483 24158 logs.go:124] Gathering logs for kube-scheduler [b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17] ...
I0128 19:05:25.438501 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17"
I0128 19:05:25.498132 24158 logs.go:124] Gathering logs for kube-controller-manager [74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732] ...
I0128 19:05:25.498149 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732"
I0128 19:05:25.542903 24158 logs.go:124] Gathering logs for container status ...
I0128 19:05:25.542922 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0128 19:05:25.573985 24158 logs.go:124] Gathering logs for kubelet ...
I0128 19:05:25.574008 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0128 19:05:25.639621 24158 logs.go:124] Gathering logs for describe nodes ...
I0128 19:05:25.639643 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0128 19:05:25.867733 24158 logs.go:124] Gathering logs for kube-apiserver [b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1] ...
I0128 19:05:25.867758 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1"
I0128 19:05:25.898708 24158 logs.go:124] Gathering logs for etcd [e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8] ...
I0128 19:05:25.898731 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8"
I0128 19:05:25.937030 24158 logs.go:124] Gathering logs for containerd ...
I0128 19:05:25.937051 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0128 19:05:28.491701 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:05:28.497341 24158 api_server.go:278] https://192.168.39.121:8443/healthz returned 200:
ok
I0128 19:05:28.503888 24158 api_server.go:140] control plane version: v1.24.4
I0128 19:05:28.503913 24158 api_server.go:130] duration metric: took 1m11.366144979s to wait for apiserver health ...
I0128 19:05:28.503923 24158 cni.go:84] Creating CNI manager for ""
I0128 19:05:28.503932 24158 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0128 19:05:28.505916 24158 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0128 19:05:28.507356 24158 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0128 19:05:28.517581 24158 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
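The 457-byte conflist that minikube copies into /etc/cni/net.d is not shown in the log. Purely as a rough illustration (the exact contents, subnet and plugin chain here are assumptions, not the real 1-k8s.conflist), a bridge CNI config of this general shape could be written out like so:

package main

import (
	"log"
	"os"
)

// A hand-written example of a bridge CNI chain (bridge + portmap with
// host-local IPAM). The real /etc/cni/net.d/1-k8s.conflist may differ.
const sampleConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Written to a scratch path here; minikube places the file under /etc/cni/net.d.
	if err := os.WriteFile("1-k8s.conflist.sample", []byte(sampleConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}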
I0128 19:05:28.534387 24158 system_pods.go:43] waiting for kube-system pods to appear ...
I0128 19:05:28.543008 24158 system_pods.go:59] 8 kube-system pods found
I0128 19:05:28.543034 24158 system_pods.go:61] "coredns-6d4b75cb6d-2h7kw" [8deffa0e-9251-4b57-b5e4-e1a3a5984f97] Running
I0128 19:05:28.543040 24158 system_pods.go:61] "coredns-6d4b75cb6d-jf4vd" [5328eb4a-ece7-4b89-86ac-98d9457fc35c] Running
I0128 19:05:28.543044 24158 system_pods.go:61] "etcd-test-preload-872855" [86cd55f9-8fee-4418-8a3f-ee0173ffc9f8] Running
I0128 19:05:28.543048 24158 system_pods.go:61] "kube-apiserver-test-preload-872855" [34752e8b-e60b-40d2-8f20-a7749951329c] Running
I0128 19:05:28.543053 24158 system_pods.go:61] "kube-controller-manager-test-preload-872855" [be3211c1-a8b8-4a63-8df5-2f3c91cda62d] Running
I0128 19:05:28.543058 24158 system_pods.go:61] "kube-proxy-jklqc" [0f3bf85c-0267-4557-b1a5-32c94839d47b] Running
I0128 19:05:28.543067 24158 system_pods.go:61] "kube-scheduler-test-preload-872855" [f06b1076-8673-475b-9cb7-4726eb07cf22] Running
I0128 19:05:28.543074 24158 system_pods.go:61] "storage-provisioner" [fd3879a6-c3ca-4736-a73b-21c4d3409797] Running
I0128 19:05:28.543078 24158 system_pods.go:74] duration metric: took 8.677418ms to wait for pod list to return data ...
I0128 19:05:28.543084 24158 node_conditions.go:102] verifying NodePressure condition ...
I0128 19:05:28.547278 24158 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0128 19:05:28.547300 24158 node_conditions.go:123] node cpu capacity is 2
I0128 19:05:28.547309 24158 node_conditions.go:105] duration metric: took 4.218704ms to run NodePressure ...
I0128 19:05:28.547323 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0128 19:05:28.710331 24158 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0128 19:05:28.714530 24158 retry.go:31] will retry after 360.127272ms: kubelet not initialised
I0128 19:05:29.079791 24158 retry.go:31] will retry after 436.71002ms: kubelet not initialised
I0128 19:05:29.522528 24158 retry.go:31] will retry after 527.46423ms: kubelet not initialised
I0128 19:05:30.055373 24158 retry.go:31] will retry after 780.162888ms: kubelet not initialised
I0128 19:05:30.841606 24158 retry.go:31] will retry after 1.502072952s: kubelet not initialised
I0128 19:05:32.349297 24158 retry.go:31] will retry after 1.073826528s: kubelet not initialised
I0128 19:05:33.428766 24158 retry.go:31] will retry after 1.869541159s: kubelet not initialised
I0128 19:05:35.303494 24158 retry.go:31] will retry after 2.549945972s: kubelet not initialised
I0128 19:05:37.858784 24158 retry.go:31] will retry after 5.131623747s: kubelet not initialised
I0128 19:05:42.995892 24158 retry.go:31] will retry after 9.757045979s: kubelet not initialised
I0128 19:05:52.758027 24158 retry.go:31] will retry after 18.937774914s: kubelet not initialised
I0128 19:06:11.703350 24158 retry.go:31] will retry after 15.44552029s: kubelet not initialised
I0128 19:06:27.153996 24158 kubeadm.go:784] kubelet initialised
I0128 19:06:27.154027 24158 kubeadm.go:785] duration metric: took 58.443668064s waiting for restarted kubelet to initialise ...
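The retry.go lines above show the wait between attempts growing from a few hundred milliseconds to tens of seconds, i.e. an exponential backoff with a little jitter, until the kubelet reports initialised after about 58 seconds. A bare-bones sketch of that pattern is below (hypothetical; minikube's retry package has more knobs, and the initial delay and cap here are assumptions).

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxWait is exhausted,
// roughly doubling the sleep between attempts and adding some jitter.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	var lastErr error
	delay := 300 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, lastErr)
		time.Sleep(sleep)
		delay *= 2
	}
	return lastErr
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("done:", err)
}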
I0128 19:06:27.154039 24158 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0128 19:06:27.162552 24158 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace to be "Ready" ...
I0128 19:06:29.174035 24158 pod_ready.go:102] pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace has status "Ready":"False"
I0128 19:06:31.174873 24158 pod_ready.go:102] pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace has status "Ready":"False"
I0128 19:06:33.176729 24158 pod_ready.go:102] pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace has status "Ready":"False"
I0128 19:06:35.674003 24158 pod_ready.go:102] pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace has status "Ready":"False"
I0128 19:06:38.174076 24158 pod_ready.go:102] pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace has status "Ready":"False"
I0128 19:06:40.175010 24158 pod_ready.go:102] pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace has status "Ready":"False"
I0128 19:06:42.675151 24158 pod_ready.go:102] pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace has status "Ready":"False"
I0128 19:06:44.677677 24158 pod_ready.go:102] pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace has status "Ready":"False"
I0128 19:06:45.175455 24158 pod_ready.go:92] pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:45.175480 24158 pod_ready.go:81] duration metric: took 18.012908253s waiting for pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.175491 24158 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.181549 24158 pod_ready.go:92] pod "etcd-test-preload-872855" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:45.181573 24158 pod_ready.go:81] duration metric: took 6.07278ms waiting for pod "etcd-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.181585 24158 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.186056 24158 pod_ready.go:92] pod "kube-apiserver-test-preload-872855" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:45.186069 24158 pod_ready.go:81] duration metric: took 4.477612ms waiting for pod "kube-apiserver-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.186077 24158 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.189841 24158 pod_ready.go:92] pod "kube-controller-manager-test-preload-872855" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:45.189855 24158 pod_ready.go:81] duration metric: took 3.772584ms waiting for pod "kube-controller-manager-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.189866 24158 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jklqc" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.193576 24158 pod_ready.go:92] pod "kube-proxy-jklqc" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:45.193590 24158 pod_ready.go:81] duration metric: took 3.717112ms waiting for pod "kube-proxy-jklqc" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.193599 24158 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.572320 24158 pod_ready.go:92] pod "kube-scheduler-test-preload-872855" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:45.572343 24158 pod_ready.go:81] duration metric: took 378.736213ms waiting for pod "kube-scheduler-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:45.572355 24158 pod_ready.go:38] duration metric: took 18.418305809s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
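The pod_ready.go block above repeatedly fetches each system-critical pod and checks whether its Ready condition is True. With client-go, an equivalent check looks roughly like the sketch below (an assumption-laden illustration, not minikube's own code; the kubeconfig path and pod name are taken from the log, and a local kubeconfig is assumed to be reachable).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True.
func isPodReady(cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := isPodReady(cs, "kube-system", "coredns-6d4b75cb6d-jf4vd")
		fmt.Println("ready:", ready, "err:", err)
		if ready {
			return
		}
		time.Sleep(2 * time.Second)
	}
}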
I0128 19:06:45.572376 24158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0128 19:06:45.584785 24158 ops.go:34] apiserver oom_adj: -16
I0128 19:06:45.584798 24158 kubeadm.go:637] restartCluster took 2m41.056528779s
I0128 19:06:45.584804 24158 kubeadm.go:403] StartCluster complete in 2m41.095355397s
I0128 19:06:45.584840 24158 settings.go:142] acquiring lock: {Name:mkbebcb132359eb6b9b805c34cb0b8e9b8bebe37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 19:06:45.584969 24158 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15565-3428/kubeconfig
I0128 19:06:45.585536 24158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3428/kubeconfig: {Name:mk68c0cdc51b4c3db12c527d0a0aac1339ffa973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 19:06:45.585750 24158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0128 19:06:45.585915 24158 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0128 19:06:45.585993 24158 addons.go:65] Setting storage-provisioner=true in profile "test-preload-872855"
I0128 19:06:45.586010 24158 addons.go:227] Setting addon storage-provisioner=true in "test-preload-872855"
I0128 19:06:45.586014 24158 config.go:180] Loaded profile config "test-preload-872855": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
W0128 19:06:45.586023 24158 addons.go:236] addon storage-provisioner should already be in state true
I0128 19:06:45.586024 24158 addons.go:65] Setting default-storageclass=true in profile "test-preload-872855"
I0128 19:06:45.586043 24158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-872855"
I0128 19:06:45.586085 24158 host.go:66] Checking if "test-preload-872855" exists ...
I0128 19:06:45.586311 24158 kapi.go:59] client config for test-preload-872855: &rest.Config{Host:"https://192.168.39.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0128 19:06:45.586455 24158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0128 19:06:45.586489 24158 main.go:141] libmachine: Launching plugin server for driver kvm2
I0128 19:06:45.586507 24158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0128 19:06:45.586532 24158 main.go:141] libmachine: Launching plugin server for driver kvm2
I0128 19:06:45.588834 24158 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-872855" context rescaled to 1 replicas
I0128 19:06:45.588866 24158 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0128 19:06:45.590805 24158 out.go:177] * Verifying Kubernetes components...
I0128 19:06:45.592180 24158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0128 19:06:45.602355 24158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
I0128 19:06:45.602746 24158 main.go:141] libmachine: () Calling .GetVersion
I0128 19:06:45.603293 24158 main.go:141] libmachine: Using API Version 1
I0128 19:06:45.603314 24158 main.go:141] libmachine: () Calling .SetConfigRaw
I0128 19:06:45.603703 24158 main.go:141] libmachine: () Calling .GetMachineName
I0128 19:06:45.604246 24158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0128 19:06:45.604283 24158 main.go:141] libmachine: Launching plugin server for driver kvm2
I0128 19:06:45.604821 24158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
I0128 19:06:45.605244 24158 main.go:141] libmachine: () Calling .GetVersion
I0128 19:06:45.605724 24158 main.go:141] libmachine: Using API Version 1
I0128 19:06:45.605743 24158 main.go:141] libmachine: () Calling .SetConfigRaw
I0128 19:06:45.606011 24158 main.go:141] libmachine: () Calling .GetMachineName
I0128 19:06:45.606229 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetState
I0128 19:06:45.608770 24158 kapi.go:59] client config for test-preload-872855: &rest.Config{Host:"https://192.168.39.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3428/.minikube/profiles/test-preload-872855/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0128 19:06:45.616603 24158 addons.go:227] Setting addon default-storageclass=true in "test-preload-872855"
W0128 19:06:45.616625 24158 addons.go:236] addon default-storageclass should already be in state true
I0128 19:06:45.616651 24158 host.go:66] Checking if "test-preload-872855" exists ...
I0128 19:06:45.617015 24158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0128 19:06:45.617047 24158 main.go:141] libmachine: Launching plugin server for driver kvm2
I0128 19:06:45.619111 24158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33335
I0128 19:06:45.619452 24158 main.go:141] libmachine: () Calling .GetVersion
I0128 19:06:45.619829 24158 main.go:141] libmachine: Using API Version 1
I0128 19:06:45.619844 24158 main.go:141] libmachine: () Calling .SetConfigRaw
I0128 19:06:45.620191 24158 main.go:141] libmachine: () Calling .GetMachineName
I0128 19:06:45.620370 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetState
I0128 19:06:45.621974 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:06:45.624323 24158 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0128 19:06:45.625891 24158 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0128 19:06:45.625906 24158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0128 19:06:45.625919 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHHostname
I0128 19:06:45.628555 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:06:45.629047 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:06:45.629075 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:06:45.629232 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHPort
I0128 19:06:45.629421 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:06:45.629578 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHUsername
I0128 19:06:45.629728 24158 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3428/.minikube/machines/test-preload-872855/id_rsa Username:docker}
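The sshutil.go line above opens an SSH connection to the node (user docker, the profile's id_rsa key) so the addon manifests can be copied over and applied. A stripped-down equivalent using golang.org/x/crypto/ssh is sketched below; the host, key path and command are copied from the log or assumed, and this is not minikube's actual ssh_runner implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH runs a single command on the node and returns its combined output.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.121:22", "docker",
		"/home/jenkins/minikube-integration/15565-3428/.minikube/machines/test-preload-872855/id_rsa",
		"sudo crictl ps -a")
	fmt.Println(out, err)
}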
I0128 19:06:45.632328 24158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
I0128 19:06:45.632696 24158 main.go:141] libmachine: () Calling .GetVersion
I0128 19:06:45.633087 24158 main.go:141] libmachine: Using API Version 1
I0128 19:06:45.633108 24158 main.go:141] libmachine: () Calling .SetConfigRaw
I0128 19:06:45.633350 24158 main.go:141] libmachine: () Calling .GetMachineName
I0128 19:06:45.633879 24158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0128 19:06:45.633914 24158 main.go:141] libmachine: Launching plugin server for driver kvm2
I0128 19:06:45.647619 24158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
I0128 19:06:45.647987 24158 main.go:141] libmachine: () Calling .GetVersion
I0128 19:06:45.648376 24158 main.go:141] libmachine: Using API Version 1
I0128 19:06:45.648393 24158 main.go:141] libmachine: () Calling .SetConfigRaw
I0128 19:06:45.648707 24158 main.go:141] libmachine: () Calling .GetMachineName
I0128 19:06:45.648877 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetState
I0128 19:06:45.650470 24158 main.go:141] libmachine: (test-preload-872855) Calling .DriverName
I0128 19:06:45.650670 24158 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0128 19:06:45.650681 24158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0128 19:06:45.650693 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHHostname
I0128 19:06:45.653433 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:06:45.653833 24158 main.go:141] libmachine: (test-preload-872855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:86:d7", ip: ""} in network mk-test-preload-872855: {Iface:virbr1 ExpiryTime:2023-01-28 20:03:42 +0000 UTC Type:0 Mac:52:54:00:dc:86:d7 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-872855 Clientid:01:52:54:00:dc:86:d7}
I0128 19:06:45.653852 24158 main.go:141] libmachine: (test-preload-872855) DBG | domain test-preload-872855 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:86:d7 in network mk-test-preload-872855
I0128 19:06:45.653998 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHPort
I0128 19:06:45.654142 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHKeyPath
I0128 19:06:45.654334 24158 main.go:141] libmachine: (test-preload-872855) Calling .GetSSHUsername
I0128 19:06:45.654512 24158 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3428/.minikube/machines/test-preload-872855/id_rsa Username:docker}
I0128 19:06:45.684651 24158 node_ready.go:35] waiting up to 6m0s for node "test-preload-872855" to be "Ready" ...
I0128 19:06:45.684808 24158 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0128 19:06:45.751202 24158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0128 19:06:45.771852 24158 node_ready.go:49] node "test-preload-872855" has status "Ready":"True"
I0128 19:06:45.771867 24158 node_ready.go:38] duration metric: took 87.18273ms waiting for node "test-preload-872855" to be "Ready" ...
I0128 19:06:45.771875 24158 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0128 19:06:45.795523 24158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0128 19:06:45.975144 24158 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace to be "Ready" ...
I0128 19:06:46.404299 24158 pod_ready.go:92] pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:46.404318 24158 pod_ready.go:81] duration metric: took 429.15252ms waiting for pod "coredns-6d4b75cb6d-jf4vd" in "kube-system" namespace to be "Ready" ...
I0128 19:06:46.404327 24158 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:46.559210 24158 main.go:141] libmachine: Making call to close driver server
I0128 19:06:46.559230 24158 main.go:141] libmachine: (test-preload-872855) Calling .Close
I0128 19:06:46.559308 24158 main.go:141] libmachine: Making call to close driver server
I0128 19:06:46.559340 24158 main.go:141] libmachine: (test-preload-872855) Calling .Close
I0128 19:06:46.559528 24158 main.go:141] libmachine: (test-preload-872855) DBG | Closing plugin on server side
I0128 19:06:46.559569 24158 main.go:141] libmachine: Successfully made call to close driver server
I0128 19:06:46.559579 24158 main.go:141] libmachine: Making call to close connection to plugin binary
I0128 19:06:46.559587 24158 main.go:141] libmachine: Making call to close driver server
I0128 19:06:46.559594 24158 main.go:141] libmachine: (test-preload-872855) Calling .Close
I0128 19:06:46.559593 24158 main.go:141] libmachine: Successfully made call to close driver server
I0128 19:06:46.559606 24158 main.go:141] libmachine: Making call to close connection to plugin binary
I0128 19:06:46.559617 24158 main.go:141] libmachine: Making call to close driver server
I0128 19:06:46.559627 24158 main.go:141] libmachine: (test-preload-872855) Calling .Close
I0128 19:06:46.559868 24158 main.go:141] libmachine: Successfully made call to close driver server
I0128 19:06:46.559885 24158 main.go:141] libmachine: Making call to close connection to plugin binary
I0128 19:06:46.559900 24158 main.go:141] libmachine: (test-preload-872855) DBG | Closing plugin on server side
I0128 19:06:46.559924 24158 main.go:141] libmachine: Successfully made call to close driver server
I0128 19:06:46.559945 24158 main.go:141] libmachine: Making call to close connection to plugin binary
I0128 19:06:46.559969 24158 main.go:141] libmachine: Making call to close driver server
I0128 19:06:46.559992 24158 main.go:141] libmachine: (test-preload-872855) Calling .Close
I0128 19:06:46.560167 24158 main.go:141] libmachine: Successfully made call to close driver server
I0128 19:06:46.560183 24158 main.go:141] libmachine: Making call to close connection to plugin binary
I0128 19:06:46.560225 24158 main.go:141] libmachine: (test-preload-872855) DBG | Closing plugin on server side
I0128 19:06:46.562253 24158 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0128 19:06:46.563552 24158 addons.go:492] enable addons completed in 977.634581ms: enabled=[storage-provisioner default-storageclass]
I0128 19:06:46.772002 24158 pod_ready.go:92] pod "etcd-test-preload-872855" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:46.772017 24158 pod_ready.go:81] duration metric: took 367.684681ms waiting for pod "etcd-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:46.772025 24158 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:47.173675 24158 pod_ready.go:92] pod "kube-apiserver-test-preload-872855" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:47.173694 24158 pod_ready.go:81] duration metric: took 401.661479ms waiting for pod "kube-apiserver-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:47.173707 24158 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:47.572199 24158 pod_ready.go:92] pod "kube-controller-manager-test-preload-872855" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:47.572217 24158 pod_ready.go:81] duration metric: took 398.50495ms waiting for pod "kube-controller-manager-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:47.572226 24158 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jklqc" in "kube-system" namespace to be "Ready" ...
I0128 19:06:47.973463 24158 pod_ready.go:92] pod "kube-proxy-jklqc" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:47.973483 24158 pod_ready.go:81] duration metric: took 401.252121ms waiting for pod "kube-proxy-jklqc" in "kube-system" namespace to be "Ready" ...
I0128 19:06:47.973491 24158 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:48.371896 24158 pod_ready.go:92] pod "kube-scheduler-test-preload-872855" in "kube-system" namespace has status "Ready":"True"
I0128 19:06:48.371915 24158 pod_ready.go:81] duration metric: took 398.417898ms waiting for pod "kube-scheduler-test-preload-872855" in "kube-system" namespace to be "Ready" ...
I0128 19:06:48.371924 24158 pod_ready.go:38] duration metric: took 2.600041866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0128 19:06:48.371941 24158 api_server.go:51] waiting for apiserver process to appear ...
I0128 19:06:48.371972 24158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0128 19:06:48.384907 24158 api_server.go:71] duration metric: took 2.796016437s to wait for apiserver process to appear ...
I0128 19:06:48.384923 24158 api_server.go:87] waiting for apiserver healthz status ...
I0128 19:06:48.384930 24158 api_server.go:252] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
I0128 19:06:48.389408 24158 api_server.go:278] https://192.168.39.121:8443/healthz returned 200:
ok
I0128 19:06:48.390215 24158 api_server.go:140] control plane version: v1.24.4
I0128 19:06:48.390230 24158 api_server.go:130] duration metric: took 5.301984ms to wait for apiserver health ...
I0128 19:06:48.390236 24158 system_pods.go:43] waiting for kube-system pods to appear ...
I0128 19:06:48.574985 24158 system_pods.go:59] 7 kube-system pods found
I0128 19:06:48.575007 24158 system_pods.go:61] "coredns-6d4b75cb6d-jf4vd" [5328eb4a-ece7-4b89-86ac-98d9457fc35c] Running
I0128 19:06:48.575012 24158 system_pods.go:61] "etcd-test-preload-872855" [86cd55f9-8fee-4418-8a3f-ee0173ffc9f8] Running
I0128 19:06:48.575017 24158 system_pods.go:61] "kube-apiserver-test-preload-872855" [34752e8b-e60b-40d2-8f20-a7749951329c] Running
I0128 19:06:48.575021 24158 system_pods.go:61] "kube-controller-manager-test-preload-872855" [be3211c1-a8b8-4a63-8df5-2f3c91cda62d] Running
I0128 19:06:48.575025 24158 system_pods.go:61] "kube-proxy-jklqc" [0f3bf85c-0267-4557-b1a5-32c94839d47b] Running
I0128 19:06:48.575029 24158 system_pods.go:61] "kube-scheduler-test-preload-872855" [f06b1076-8673-475b-9cb7-4726eb07cf22] Running
I0128 19:06:48.575032 24158 system_pods.go:61] "storage-provisioner" [fd3879a6-c3ca-4736-a73b-21c4d3409797] Running
I0128 19:06:48.575038 24158 system_pods.go:74] duration metric: took 184.796349ms to wait for pod list to return data ...
I0128 19:06:48.575044 24158 default_sa.go:34] waiting for default service account to be created ...
I0128 19:06:48.771506 24158 default_sa.go:45] found service account: "default"
I0128 19:06:48.771521 24158 default_sa.go:55] duration metric: took 196.473144ms for default service account to be created ...
I0128 19:06:48.771527 24158 system_pods.go:116] waiting for k8s-apps to be running ...
I0128 19:06:48.975129 24158 system_pods.go:86] 7 kube-system pods found
I0128 19:06:48.975148 24158 system_pods.go:89] "coredns-6d4b75cb6d-jf4vd" [5328eb4a-ece7-4b89-86ac-98d9457fc35c] Running
I0128 19:06:48.975153 24158 system_pods.go:89] "etcd-test-preload-872855" [86cd55f9-8fee-4418-8a3f-ee0173ffc9f8] Running
I0128 19:06:48.975158 24158 system_pods.go:89] "kube-apiserver-test-preload-872855" [34752e8b-e60b-40d2-8f20-a7749951329c] Running
I0128 19:06:48.975162 24158 system_pods.go:89] "kube-controller-manager-test-preload-872855" [be3211c1-a8b8-4a63-8df5-2f3c91cda62d] Running
I0128 19:06:48.975167 24158 system_pods.go:89] "kube-proxy-jklqc" [0f3bf85c-0267-4557-b1a5-32c94839d47b] Running
I0128 19:06:48.975171 24158 system_pods.go:89] "kube-scheduler-test-preload-872855" [f06b1076-8673-475b-9cb7-4726eb07cf22] Running
I0128 19:06:48.975174 24158 system_pods.go:89] "storage-provisioner" [fd3879a6-c3ca-4736-a73b-21c4d3409797] Running
I0128 19:06:48.975180 24158 system_pods.go:126] duration metric: took 203.649276ms to wait for k8s-apps to be running ...
I0128 19:06:48.975187 24158 system_svc.go:44] waiting for kubelet service to be running ....
I0128 19:06:48.975224 24158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0128 19:06:48.988741 24158 system_svc.go:56] duration metric: took 13.548708ms WaitForService to wait for kubelet.
I0128 19:06:48.988762 24158 kubeadm.go:578] duration metric: took 3.399873567s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0128 19:06:48.988781 24158 node_conditions.go:102] verifying NodePressure condition ...
I0128 19:06:49.173439 24158 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0128 19:06:49.173459 24158 node_conditions.go:123] node cpu capacity is 2
I0128 19:06:49.173468 24158 node_conditions.go:105] duration metric: took 184.683049ms to run NodePressure ...
I0128 19:06:49.173477 24158 start.go:228] waiting for startup goroutines ...
I0128 19:06:49.173483 24158 start.go:233] waiting for cluster config update ...
I0128 19:06:49.173491 24158 start.go:240] writing updated cluster config ...
I0128 19:06:49.173724 24158 ssh_runner.go:195] Run: rm -f paused
I0128 19:06:49.222083 24158 start.go:555] kubectl: 1.26.1, cluster: 1.24.4 (minor skew: 2)
I0128 19:06:49.224200 24158 out.go:177]
W0128 19:06:49.225489 24158 out.go:239] ! /usr/local/bin/kubectl is version 1.26.1, which may have incompatibilities with Kubernetes 1.24.4.
I0128 19:06:49.226980 24158 out.go:177] - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
I0128 19:06:49.228555 24158 out.go:177] * Done! kubectl is now configured to use "test-preload-872855" cluster and "default" namespace by default
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
cd24d8452d32b 6e38f40d628db 14 seconds ago Running storage-provisioner 2 fae2622f8914b
8ffbec82e20aa a4ca41631cc7a 44 seconds ago Running coredns 1 3db1b0eef0643
7864fb8f66b96 7a53d1e08ef58 45 seconds ago Running kube-proxy 1 b08a2e88e24af
a3ffff28ae278 6e38f40d628db 45 seconds ago Exited storage-provisioner 1 fae2622f8914b
831ffcaab0235 1f99cb6da9a82 About a minute ago Running kube-controller-manager 3 64ed8aa41b113
d45599c40f42c 6cab9d1bed1be About a minute ago Running kube-apiserver 2 aabdfa8dcbc75
74e81bdcc0fbf 1f99cb6da9a82 About a minute ago Exited kube-controller-manager 2 64ed8aa41b113
e415685fb3701 aebe758cef4cd 2 minutes ago Running etcd 1 b0a032d070093
b5cf17626e418 03fa22539fc1c 2 minutes ago Running kube-scheduler 1 32e92f9fd5a57
b59adb08bca0f 6cab9d1bed1be 2 minutes ago Exited kube-apiserver 1 aabdfa8dcbc75
*
* ==> containerd <==
* -- Journal begins at Sat 2023-01-28 19:03:42 UTC, ends at Sat 2023-01-28 19:06:50 UTC. --
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.095456161Z" level=info msg="CreateContainer within sandbox \"b08a2e88e24afbf50454a56618b3ae27da0889031b1425f8c75cda17f67a445b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.101658117Z" level=info msg="CreateContainer within sandbox \"fae2622f8914b5defbd07432dd21318bd3d759f35ad005ab7cda9c283ca4e4d8\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"a3ffff28ae278eaab79982ea1ef72cd374fdb3187d81007247a6e62174f91bcb\""
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.102446399Z" level=info msg="StartContainer for \"a3ffff28ae278eaab79982ea1ef72cd374fdb3187d81007247a6e62174f91bcb\""
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.107316209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.107461777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.107506548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.107840422Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3db1b0eef0643c701e40015164721340d2f3a54523c46be97dbc73513ce50b9b pid=1831 runtime=io.containerd.runc.v2
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.135120149Z" level=info msg="CreateContainer within sandbox \"b08a2e88e24afbf50454a56618b3ae27da0889031b1425f8c75cda17f67a445b\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"7864fb8f66b9617a065d367b255dc6779802e5946630732db02a22c4ebea074f\""
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.137393105Z" level=info msg="StartContainer for \"7864fb8f66b9617a065d367b255dc6779802e5946630732db02a22c4ebea074f\""
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.343057408Z" level=info msg="StartContainer for \"a3ffff28ae278eaab79982ea1ef72cd374fdb3187d81007247a6e62174f91bcb\" returns successfully"
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.385024294Z" level=info msg="StartContainer for \"7864fb8f66b9617a065d367b255dc6779802e5946630732db02a22c4ebea074f\" returns successfully"
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.581148011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-jf4vd,Uid:5328eb4a-ece7-4b89-86ac-98d9457fc35c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3db1b0eef0643c701e40015164721340d2f3a54523c46be97dbc73513ce50b9b\""
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.585614112Z" level=info msg="CreateContainer within sandbox \"3db1b0eef0643c701e40015164721340d2f3a54523c46be97dbc73513ce50b9b\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.620932557Z" level=info msg="CreateContainer within sandbox \"3db1b0eef0643c701e40015164721340d2f3a54523c46be97dbc73513ce50b9b\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"8ffbec82e20aa8ae85da54f1a11dfb8cc2258d796cc8a1b4cfccdf272cecae46\""
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.622069518Z" level=info msg="StartContainer for \"8ffbec82e20aa8ae85da54f1a11dfb8cc2258d796cc8a1b4cfccdf272cecae46\""
Jan 28 19:06:05 test-preload-872855 containerd[634]: time="2023-01-28T19:06:05.767461342Z" level=info msg="StartContainer for \"8ffbec82e20aa8ae85da54f1a11dfb8cc2258d796cc8a1b4cfccdf272cecae46\" returns successfully"
Jan 28 19:06:15 test-preload-872855 containerd[634]: time="2023-01-28T19:06:15.601167048Z" level=error msg="ContainerStatus for \"2b1130a4378cb6a91bf011490ea07a45a3f0f1aa7698a1277573f9b4340050ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b1130a4378cb6a91bf011490ea07a45a3f0f1aa7698a1277573f9b4340050ee\": not found"
Jan 28 19:06:35 test-preload-872855 containerd[634]: time="2023-01-28T19:06:35.537123329Z" level=info msg="shim disconnected" id=a3ffff28ae278eaab79982ea1ef72cd374fdb3187d81007247a6e62174f91bcb
Jan 28 19:06:35 test-preload-872855 containerd[634]: time="2023-01-28T19:06:35.537202272Z" level=warning msg="cleaning up after shim disconnected" id=a3ffff28ae278eaab79982ea1ef72cd374fdb3187d81007247a6e62174f91bcb namespace=k8s.io
Jan 28 19:06:35 test-preload-872855 containerd[634]: time="2023-01-28T19:06:35.537215063Z" level=info msg="cleaning up dead shim"
Jan 28 19:06:35 test-preload-872855 containerd[634]: time="2023-01-28T19:06:35.549494905Z" level=warning msg="cleanup warnings time=\"2023-01-28T19:06:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2075 runtime=io.containerd.runc.v2\n"
Jan 28 19:06:36 test-preload-872855 containerd[634]: time="2023-01-28T19:06:36.015687957Z" level=info msg="CreateContainer within sandbox \"fae2622f8914b5defbd07432dd21318bd3d759f35ad005ab7cda9c283ca4e4d8\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
Jan 28 19:06:36 test-preload-872855 containerd[634]: time="2023-01-28T19:06:36.045032321Z" level=info msg="CreateContainer within sandbox \"fae2622f8914b5defbd07432dd21318bd3d759f35ad005ab7cda9c283ca4e4d8\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"cd24d8452d32b21eea5dececb00f7fca0f185fbb7119ff323b104a650862d3d2\""
Jan 28 19:06:36 test-preload-872855 containerd[634]: time="2023-01-28T19:06:36.045747532Z" level=info msg="StartContainer for \"cd24d8452d32b21eea5dececb00f7fca0f185fbb7119ff323b104a650862d3d2\""
Jan 28 19:06:36 test-preload-872855 containerd[634]: time="2023-01-28T19:06:36.123276445Z" level=info msg="StartContainer for \"cd24d8452d32b21eea5dececb00f7fca0f185fbb7119ff323b104a650862d3d2\" returns successfully"
*
* ==> coredns [8ffbec82e20aa8ae85da54f1a11dfb8cc2258d796cc8a1b4cfccdf272cecae46] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] 127.0.0.1:52547 - 63081 "HINFO IN 3129873958102617419.7647548758799673047. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008286422s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
*
* ==> describe nodes <==
* Name: test-preload-872855
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=test-preload-872855
kubernetes.io/os=linux
minikube.k8s.io/commit=0b7a59349a2d83a39298292bdec73f3c39ac1090
minikube.k8s.io/name=test-preload-872855
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_28T19_02_46_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 28 Jan 2023 19:02:43 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: test-preload-872855
AcquireTime: <unset>
RenewTime: Sat, 28 Jan 2023 19:06:46 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 28 Jan 2023 19:05:39 +0000 Sat, 28 Jan 2023 19:02:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 28 Jan 2023 19:05:39 +0000 Sat, 28 Jan 2023 19:02:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 28 Jan 2023 19:05:39 +0000 Sat, 28 Jan 2023 19:02:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 28 Jan 2023 19:05:39 +0000 Sat, 28 Jan 2023 19:05:39 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.121
Hostname: test-preload-872855
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 40c181bcb67943cab7f329f3e565c63b
System UUID: 40c181bc-b679-43ca-b7f3-29f3e565c63b
Boot ID: 4ba97418-64a5-45df-a45b-280ebb238df8
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.15
Kubelet Version: v1.24.4
Kube-Proxy Version: v1.24.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-jf4vd 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 3m52s
kube-system etcd-test-preload-872855 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 4m4s
kube-system kube-apiserver-test-preload-872855 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m4s
kube-system kube-controller-manager-test-preload-872855 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m4s
kube-system kube-proxy-jklqc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m52s
kube-system kube-scheduler-test-preload-872855 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m5s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m50s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3m50s kube-proxy
Normal Starting 44s kube-proxy
Normal NodeAllocatableEnforced 4m14s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m14s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m14s (x4 over 4m14s) kubelet Node test-preload-872855 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 4m14s (x4 over 4m14s) kubelet Node test-preload-872855 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 4m14s (x4 over 4m14s) kubelet Node test-preload-872855 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 4m4s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m4s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m4s kubelet Node test-preload-872855 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m4s kubelet Node test-preload-872855 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m4s kubelet Node test-preload-872855 status is now: NodeHasSufficientPID
Normal NodeReady 3m54s kubelet Node test-preload-872855 status is now: NodeReady
Normal RegisteredNode 3m53s node-controller Node test-preload-872855 event: Registered Node test-preload-872855 in Controller
Normal Starting 2m35s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m35s (x8 over 2m35s) kubelet Node test-preload-872855 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m35s (x8 over 2m35s) kubelet Node test-preload-872855 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m35s (x7 over 2m35s) kubelet Node test-preload-872855 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m35s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 58s node-controller Node test-preload-872855 event: Registered Node test-preload-872855 in Controller
*
* ==> dmesg <==
* [Jan28 19:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.069737] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.859162] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.128214] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.141505] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.507208] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +14.930215] systemd-fstab-generator[530]: Ignoring "noauto" for root device
[Jan28 19:04] systemd-fstab-generator[562]: Ignoring "noauto" for root device
[ +0.102256] systemd-fstab-generator[573]: Ignoring "noauto" for root device
[ +0.118111] systemd-fstab-generator[586]: Ignoring "noauto" for root device
[ +0.101659] systemd-fstab-generator[597]: Ignoring "noauto" for root device
[ +0.222907] systemd-fstab-generator[625]: Ignoring "noauto" for root device
[ +12.730872] systemd-fstab-generator[819]: Ignoring "noauto" for root device
[Jan28 19:06] kauditd_printk_skb: 7 callbacks suppressed
[ +40.008473] kauditd_printk_skb: 15 callbacks suppressed
*
* ==> etcd [e415685fb37015eb122fc9f45808c70579d7edc2eaaf0b54e52539df8331a1b8] <==
* {"level":"info","ts":"2023-01-28T19:04:44.123Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"cbdf275f553df7c2","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-01-28T19:04:44.124Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-01-28T19:04:44.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
{"level":"info","ts":"2023-01-28T19:04:44.124Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"]}
{"level":"info","ts":"2023-01-28T19:04:44.124Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-28T19:04:44.126Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-28T19:04:44.126Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-28T19:04:44.128Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-28T19:04:44.128Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-28T19:04:44.128Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.121:2380"}
{"level":"info","ts":"2023-01-28T19:04:44.128Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.121:2380"}
{"level":"info","ts":"2023-01-28T19:04:45.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
{"level":"info","ts":"2023-01-28T19:04:45.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
{"level":"info","ts":"2023-01-28T19:04:45.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
{"level":"info","ts":"2023-01-28T19:04:45.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became candidate at term 3"}
{"level":"info","ts":"2023-01-28T19:04:45.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
{"level":"info","ts":"2023-01-28T19:04:45.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became leader at term 3"}
{"level":"info","ts":"2023-01-28T19:04:45.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
{"level":"info","ts":"2023-01-28T19:04:45.414Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:test-preload-872855 ClientURLs:[https://192.168.39.121:2379]}","request-path":"/0/members/cbdf275f553df7c2/attributes","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-28T19:04:45.415Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-28T19:04:45.416Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.121:2379"}
{"level":"info","ts":"2023-01-28T19:04:45.416Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-28T19:04:45.417Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-28T19:04:45.417Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-28T19:04:45.418Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> kernel <==
* 19:06:50 up 3 min, 0 users, load average: 0.41, 0.26, 0.10
Linux test-preload-872855 5.10.57 #1 SMP Fri Jan 27 18:05:35 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [b59adb08bca0f9ab0e4bd072c46cdf894ce4d7e87ed13272fa5efb0b190e9ed1] <==
* I0128 19:04:16.871836 1 server.go:558] external host was not specified, using 192.168.39.121
I0128 19:04:16.872542 1 server.go:158] Version: v1.24.4
I0128 19:04:16.872565 1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0128 19:04:17.213853 1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
I0128 19:04:17.214928 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0128 19:04:17.215057 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0128 19:04:17.216443 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0128 19:04:17.216549 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0128 19:04:17.220088 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:18.214373 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:18.221036 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:19.215103 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:20.019546 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:20.877622 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:22.142276 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:23.050319 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:26.372559 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:26.401136 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:33.306393 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0128 19:04:34.060227 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
E0128 19:04:37.220637 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-apiserver [d45599c40f42c9f2018ecc1649778c95692d3f502d488a44d55a1c94d7328826] <==
* I0128 19:05:24.623343 1 naming_controller.go:291] Starting NamingConditionController
I0128 19:05:24.623352 1 establishing_controller.go:76] Starting EstablishingController
I0128 19:05:24.623358 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0128 19:05:24.623366 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0128 19:05:24.623373 1 crd_finalizer.go:266] Starting CRDFinalizer
I0128 19:05:24.651867 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0128 19:05:24.651903 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0128 19:05:24.715585 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0128 19:05:24.716233 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0128 19:05:24.716657 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0128 19:05:24.722726 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0128 19:05:24.733067 1 cache.go:39] Caches are synced for autoregister controller
I0128 19:05:24.740691 1 shared_informer.go:262] Caches are synced for node_authorizer
I0128 19:05:24.752686 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0128 19:05:24.783568 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0128 19:05:25.277492 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0128 19:05:25.629335 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0128 19:05:28.617603 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0128 19:05:28.626913 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0128 19:05:28.664136 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0128 19:05:28.679722 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0128 19:05:28.685910 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0128 19:06:05.263752 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0128 19:06:05.266302 1 controller.go:611] quota admission added evaluator for: endpoints
I0128 19:06:05.736759 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-controller-manager [74e81bdcc0fbf79c3a425554fa3ab177928e7b293142dc0f54a4a83f097f6732] <==
* vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x2f6
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run.func1()
vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:165 +0x3c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x3931a60?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x4d010e0, 0xc000dfd560}, 0x1, 0xc0000c2600)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0xdf8475800, 0x0, 0xa0?, 0xc00006efd0?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x4d2abb0?, 0xc00047b280?, 0xc0007d3f20?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:164 +0x372
goroutine 149 [syscall]:
syscall.Syscall6(0xe8, 0xd, 0xc000ea5c14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
/usr/local/go/src/syscall/asm_linux_amd64.s:43 +0x5
k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0x8b39a1bb88a0dd1f?, {0xc000ea5c14?, 0x2646d8bdbcd7a66?, 0xe857541aac2ce9db?}, 0xbfaeb1d5b3f94173?)
vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:56 +0x58
k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0001e1240)
vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x7d
k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0000bf1d0)
vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x26e
created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1c5
*
* ==> kube-controller-manager [831ffcaab0235482f72ae0d5b00314953921106de9c6697ccd2dbaf79963a2e6] <==
* I0128 19:05:52.433147 1 shared_informer.go:262] Caches are synced for GC
I0128 19:05:52.443781 1 shared_informer.go:262] Caches are synced for stateful set
I0128 19:05:52.449137 1 shared_informer.go:262] Caches are synced for PVC protection
I0128 19:05:52.449187 1 shared_informer.go:262] Caches are synced for job
I0128 19:05:52.451491 1 shared_informer.go:262] Caches are synced for deployment
I0128 19:05:52.455459 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0128 19:05:52.469044 1 shared_informer.go:262] Caches are synced for HPA
I0128 19:05:52.479684 1 shared_informer.go:262] Caches are synced for taint
I0128 19:05:52.479824 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
W0128 19:05:52.480032 1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-872855. Assuming now as a timestamp.
I0128 19:05:52.480107 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0128 19:05:52.481021 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0128 19:05:52.481838 1 event.go:294] "Event occurred" object="test-preload-872855" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-872855 event: Registered Node test-preload-872855 in Controller"
I0128 19:05:52.493753 1 shared_informer.go:262] Caches are synced for resource quota
I0128 19:05:52.500118 1 shared_informer.go:262] Caches are synced for disruption
I0128 19:05:52.500200 1 disruption.go:371] Sending events to api server.
I0128 19:05:52.503580 1 shared_informer.go:262] Caches are synced for daemon sets
I0128 19:05:52.504134 1 shared_informer.go:262] Caches are synced for resource quota
I0128 19:05:52.507651 1 shared_informer.go:262] Caches are synced for attach detach
I0128 19:05:52.516437 1 shared_informer.go:262] Caches are synced for ReplicationController
I0128 19:05:52.518760 1 shared_informer.go:262] Caches are synced for persistent volume
I0128 19:05:52.524296 1 shared_informer.go:262] Caches are synced for endpoint
I0128 19:05:52.919147 1 shared_informer.go:262] Caches are synced for garbage collector
I0128 19:05:52.939776 1 shared_informer.go:262] Caches are synced for garbage collector
I0128 19:05:52.939889 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [7864fb8f66b9617a065d367b255dc6779802e5946630732db02a22c4ebea074f] <==
* I0128 19:06:05.624617 1 node.go:163] Successfully retrieved node IP: 192.168.39.121
I0128 19:06:05.624893 1 server_others.go:138] "Detected node IP" address="192.168.39.121"
I0128 19:06:05.625370 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0128 19:06:05.728631 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0128 19:06:05.728648 1 server_others.go:206] "Using iptables Proxier"
I0128 19:06:05.729304 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0128 19:06:05.730271 1 server.go:661] "Version info" version="v1.24.4"
I0128 19:06:05.730285 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0128 19:06:05.731696 1 config.go:317] "Starting service config controller"
I0128 19:06:05.731779 1 shared_informer.go:255] Waiting for caches to sync for service config
I0128 19:06:05.731801 1 config.go:226] "Starting endpoint slice config controller"
I0128 19:06:05.731805 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0128 19:06:05.733326 1 config.go:444] "Starting node config controller"
I0128 19:06:05.733334 1 shared_informer.go:255] Waiting for caches to sync for node config
I0128 19:06:05.833007 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0128 19:06:05.833189 1 shared_informer.go:262] Caches are synced for service config
I0128 19:06:05.833804 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [b5cf17626e418a608a0d99fc3d7ed2be7b2e801582804fa73d32a57b01a7ee17] <==
* W0128 19:05:08.844684 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.39.121:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:08.844751 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.121:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:10.305137 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.39.121:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:10.305327 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.121:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:10.508830 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.39.121:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:10.509112 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.121:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:10.641907 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.39.121:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:10.642099 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.121:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:13.203395 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.39.121:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:13.203425 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.121:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:14.851434 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.121:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:14.851462 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.121:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:15.592103 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.39.121:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:15.592187 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.121:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:15.817030 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.39.121:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:15.817073 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.121:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:16.040374 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.121:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:16.040444 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.121:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:16.336816 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.121:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:16.336890 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.121:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:17.725659 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.39.121:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:17.725749 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.121:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
W0128 19:05:19.210302 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.39.121:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
E0128 19:05:19.210335 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.121:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.121:8443: connect: connection refused
I0128 19:05:59.435870 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Sat 2023-01-28 19:03:42 UTC, ends at Sat 2023-01-28 19:06:50 UTC. --
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.646650 825 topology_manager.go:200] "Topology Admit Handler"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.647048 825 topology_manager.go:200] "Topology Admit Handler"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.647252 825 topology_manager.go:200] "Topology Admit Handler"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.647443 825 topology_manager.go:200] "Topology Admit Handler"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.755816 825 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f3bf85c-0267-4557-b1a5-32c94839d47b-lib-modules\") pod \"kube-proxy-jklqc\" (UID: \"0f3bf85c-0267-4557-b1a5-32c94839d47b\") " pod="kube-system/kube-proxy-jklqc"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.756056 825 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0f3bf85c-0267-4557-b1a5-32c94839d47b-kube-proxy\") pod \"kube-proxy-jklqc\" (UID: \"0f3bf85c-0267-4557-b1a5-32c94839d47b\") " pod="kube-system/kube-proxy-jklqc"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.756204 825 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56mt\" (UniqueName: \"kubernetes.io/projected/5328eb4a-ece7-4b89-86ac-98d9457fc35c-kube-api-access-k56mt\") pod \"coredns-6d4b75cb6d-jf4vd\" (UID: \"5328eb4a-ece7-4b89-86ac-98d9457fc35c\") " pod="kube-system/coredns-6d4b75cb6d-jf4vd"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.756268 825 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7gpk\" (UniqueName: \"kubernetes.io/projected/0f3bf85c-0267-4557-b1a5-32c94839d47b-kube-api-access-w7gpk\") pod \"kube-proxy-jklqc\" (UID: \"0f3bf85c-0267-4557-b1a5-32c94839d47b\") " pod="kube-system/kube-proxy-jklqc"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.756298 825 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fd3879a6-c3ca-4736-a73b-21c4d3409797-tmp\") pod \"storage-provisioner\" (UID: \"fd3879a6-c3ca-4736-a73b-21c4d3409797\") " pod="kube-system/storage-provisioner"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.756319 825 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f3bf85c-0267-4557-b1a5-32c94839d47b-xtables-lock\") pod \"kube-proxy-jklqc\" (UID: \"0f3bf85c-0267-4557-b1a5-32c94839d47b\") " pod="kube-system/kube-proxy-jklqc"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.756343 825 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5328eb4a-ece7-4b89-86ac-98d9457fc35c-config-volume\") pod \"coredns-6d4b75cb6d-jf4vd\" (UID: \"5328eb4a-ece7-4b89-86ac-98d9457fc35c\") " pod="kube-system/coredns-6d4b75cb6d-jf4vd"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.756365 825 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgj8x\" (UniqueName: \"kubernetes.io/projected/fd3879a6-c3ca-4736-a73b-21c4d3409797-kube-api-access-dgj8x\") pod \"storage-provisioner\" (UID: \"fd3879a6-c3ca-4736-a73b-21c4d3409797\") " pod="kube-system/storage-provisioner"
Jan 28 19:06:03 test-preload-872855 kubelet[825]: I0128 19:06:03.756384 825 reconciler.go:159] "Reconciler: start to sync state"
Jan 28 19:06:04 test-preload-872855 kubelet[825]: W0128 19:06:04.135618 825 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/8deffa0e-9251-4b57-b5e4-e1a3a5984f97/volumes/kubernetes.io~projected/kube-api-access-qq8n9: clearQuota called, but quotas disabled
Jan 28 19:06:04 test-preload-872855 kubelet[825]: I0128 19:06:04.135809 825 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8deffa0e-9251-4b57-b5e4-e1a3a5984f97-kube-api-access-qq8n9" (OuterVolumeSpecName: "kube-api-access-qq8n9") pod "8deffa0e-9251-4b57-b5e4-e1a3a5984f97" (UID: "8deffa0e-9251-4b57-b5e4-e1a3a5984f97"). InnerVolumeSpecName "kube-api-access-qq8n9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:06:04 test-preload-872855 kubelet[825]: I0128 19:06:04.135316 825 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq8n9\" (UniqueName: \"kubernetes.io/projected/8deffa0e-9251-4b57-b5e4-e1a3a5984f97-kube-api-access-qq8n9\") pod \"8deffa0e-9251-4b57-b5e4-e1a3a5984f97\" (UID: \"8deffa0e-9251-4b57-b5e4-e1a3a5984f97\") "
Jan 28 19:06:04 test-preload-872855 kubelet[825]: I0128 19:06:04.136090 825 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8deffa0e-9251-4b57-b5e4-e1a3a5984f97-config-volume\") pod \"8deffa0e-9251-4b57-b5e4-e1a3a5984f97\" (UID: \"8deffa0e-9251-4b57-b5e4-e1a3a5984f97\") "
Jan 28 19:06:04 test-preload-872855 kubelet[825]: W0128 19:06:04.136322 825 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/8deffa0e-9251-4b57-b5e4-e1a3a5984f97/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
Jan 28 19:06:04 test-preload-872855 kubelet[825]: I0128 19:06:04.136857 825 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8deffa0e-9251-4b57-b5e4-e1a3a5984f97-config-volume" (OuterVolumeSpecName: "config-volume") pod "8deffa0e-9251-4b57-b5e4-e1a3a5984f97" (UID: "8deffa0e-9251-4b57-b5e4-e1a3a5984f97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 19:06:04 test-preload-872855 kubelet[825]: I0128 19:06:04.137619 825 reconciler.go:384] "Volume detached for volume \"kube-api-access-qq8n9\" (UniqueName: \"kubernetes.io/projected/8deffa0e-9251-4b57-b5e4-e1a3a5984f97-kube-api-access-qq8n9\") on node \"test-preload-872855\" DevicePath \"\""
Jan 28 19:06:04 test-preload-872855 kubelet[825]: I0128 19:06:04.137730 825 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8deffa0e-9251-4b57-b5e4-e1a3a5984f97-config-volume\") on node \"test-preload-872855\" DevicePath \"\""
Jan 28 19:06:07 test-preload-872855 kubelet[825]: I0128 19:06:07.667991 825 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8deffa0e-9251-4b57-b5e4-e1a3a5984f97 path="/var/lib/kubelet/pods/8deffa0e-9251-4b57-b5e4-e1a3a5984f97/volumes"
Jan 28 19:06:15 test-preload-872855 kubelet[825]: E0128 19:06:15.601429 825 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b1130a4378cb6a91bf011490ea07a45a3f0f1aa7698a1277573f9b4340050ee\": not found" containerID="2b1130a4378cb6a91bf011490ea07a45a3f0f1aa7698a1277573f9b4340050ee"
Jan 28 19:06:15 test-preload-872855 kubelet[825]: I0128 19:06:15.601470 825 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="2b1130a4378cb6a91bf011490ea07a45a3f0f1aa7698a1277573f9b4340050ee" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b1130a4378cb6a91bf011490ea07a45a3f0f1aa7698a1277573f9b4340050ee\": not found"
Jan 28 19:06:36 test-preload-872855 kubelet[825]: I0128 19:06:36.011799 825 scope.go:110] "RemoveContainer" containerID="a3ffff28ae278eaab79982ea1ef72cd374fdb3187d81007247a6e62174f91bcb"
*
* ==> storage-provisioner [a3ffff28ae278eaab79982ea1ef72cd374fdb3187d81007247a6e62174f91bcb] <==
* I0128 19:06:05.471645 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0128 19:06:35.506220 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
*
* ==> storage-provisioner [cd24d8452d32b21eea5dececb00f7fca0f185fbb7119ff323b104a650862d3d2] <==
* I0128 19:06:36.134705 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0128 19:06:36.149204 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0128 19:06:36.149491 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-872855 -n test-preload-872855
helpers_test.go:261: (dbg) Run: kubectl --context test-preload-872855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-872855" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-872855
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-872855: (1.181045779s)
--- FAIL: TestPreload (309.27s)