=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-amd64 start -p old-k8s-version-985498 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.20.0
E0316 18:10:16.969161 788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:27.209849 788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-985498 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (7m23.08748036s)
-- stdout --
* [old-k8s-version-985498] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=18277
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
* Using the kvm2 driver based on existing profile
* Starting "old-k8s-version-985498" primary control-plane node in "old-k8s-version-985498" cluster
* Restarting existing kvm2 VM for "old-k8s-version-985498" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.14 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-985498 addons enable metrics-server
* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
-- /stdout --
** stderr **
I0316 18:10:14.143143 838136 out.go:291] Setting OutFile to fd 1 ...
I0316 18:10:14.143493 838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 18:10:14.143506 838136 out.go:304] Setting ErrFile to fd 2...
I0316 18:10:14.143511 838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 18:10:14.143744 838136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
I0316 18:10:14.144360 838136 out.go:298] Setting JSON to false
I0316 18:10:14.145343 838136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":85961,"bootTime":1710526653,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0316 18:10:14.145423 838136 start.go:139] virtualization: kvm guest
I0316 18:10:14.147955 838136 out.go:177] * [old-k8s-version-985498] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0316 18:10:14.149608 838136 out.go:177] - MINIKUBE_LOCATION=18277
I0316 18:10:14.149671 838136 notify.go:220] Checking for updates...
I0316 18:10:14.151140 838136 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0316 18:10:14.152751 838136 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
I0316 18:10:14.154243 838136 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
I0316 18:10:14.155870 838136 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0316 18:10:14.157331 838136 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0316 18:10:14.159117 838136 config.go:182] Loaded profile config "old-k8s-version-985498": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0316 18:10:14.159586 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:10:14.159671 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:10:14.175490 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41697
I0316 18:10:14.175971 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:10:14.176543 838136 main.go:141] libmachine: Using API Version 1
I0316 18:10:14.176569 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:10:14.178134 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:10:14.178602 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:10:14.180531 838136 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
I0316 18:10:14.181797 838136 driver.go:392] Setting default libvirt URI to qemu:///system
I0316 18:10:14.182103 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:10:14.182156 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:10:14.197956 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
I0316 18:10:14.198416 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:10:14.199075 838136 main.go:141] libmachine: Using API Version 1
I0316 18:10:14.199106 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:10:14.199479 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:10:14.199712 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:10:14.238564 838136 out.go:177] * Using the kvm2 driver based on existing profile
I0316 18:10:14.239974 838136 start.go:297] selected driver: kvm2
I0316 18:10:14.240001 838136 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-985498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-985498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0316 18:10:14.240113 838136 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0316 18:10:14.240864 838136 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0316 18:10:14.240952 838136 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18277-781196/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0316 18:10:14.257576 838136 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
I0316 18:10:14.257978 838136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0316 18:10:14.258055 838136 cni.go:84] Creating CNI manager for ""
I0316 18:10:14.258069 838136 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0316 18:10:14.258140 838136 start.go:340] cluster config:
{Name:old-k8s-version-985498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-985498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0316 18:10:14.258255 838136 iso.go:125] acquiring lock: {Name:mk48d016d8d435147389d59734ec7ed09e828db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0316 18:10:14.261041 838136 out.go:177] * Starting "old-k8s-version-985498" primary control-plane node in "old-k8s-version-985498" cluster
I0316 18:10:14.262777 838136 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0316 18:10:14.262860 838136 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
I0316 18:10:14.262878 838136 cache.go:56] Caching tarball of preloaded images
I0316 18:10:14.263029 838136 preload.go:173] Found /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0316 18:10:14.263065 838136 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0316 18:10:14.263201 838136 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/config.json ...
I0316 18:10:14.263459 838136 start.go:360] acquireMachinesLock for old-k8s-version-985498: {Name:mkf97f06937f9fa972ee38e81e5f88859912f65f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0316 18:10:20.013308 838136 start.go:364] duration metric: took 5.749789254s to acquireMachinesLock for "old-k8s-version-985498"
I0316 18:10:20.013370 838136 start.go:96] Skipping create...Using existing machine configuration
I0316 18:10:20.013379 838136 fix.go:54] fixHost starting:
I0316 18:10:20.013803 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:10:20.013858 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:10:20.031278 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43955
I0316 18:10:20.031799 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:10:20.032415 838136 main.go:141] libmachine: Using API Version 1
I0316 18:10:20.032442 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:10:20.032905 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:10:20.033170 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:10:20.033364 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
I0316 18:10:20.035302 838136 fix.go:112] recreateIfNeeded on old-k8s-version-985498: state=Stopped err=<nil>
I0316 18:10:20.035329 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
W0316 18:10:20.035499 838136 fix.go:138] unexpected machine state, will restart: <nil>
I0316 18:10:20.037420 838136 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-985498" ...
I0316 18:10:20.038678 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Start
I0316 18:10:20.038900 838136 main.go:141] libmachine: (old-k8s-version-985498) Ensuring networks are active...
I0316 18:10:20.039777 838136 main.go:141] libmachine: (old-k8s-version-985498) Ensuring network default is active
I0316 18:10:20.040326 838136 main.go:141] libmachine: (old-k8s-version-985498) Ensuring network mk-old-k8s-version-985498 is active
I0316 18:10:20.040810 838136 main.go:141] libmachine: (old-k8s-version-985498) Getting domain xml...
I0316 18:10:20.041632 838136 main.go:141] libmachine: (old-k8s-version-985498) Creating domain...
I0316 18:10:21.312095 838136 main.go:141] libmachine: (old-k8s-version-985498) Waiting to get IP...
I0316 18:10:21.313052 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:21.313576 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:21.313666 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:21.313555 838194 retry.go:31] will retry after 222.546171ms: waiting for machine to come up
I0316 18:10:21.538210 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:21.538853 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:21.538881 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:21.538822 838194 retry.go:31] will retry after 367.506447ms: waiting for machine to come up
I0316 18:10:21.908499 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:21.908979 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:21.909016 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:21.908938 838194 retry.go:31] will retry after 461.629269ms: waiting for machine to come up
I0316 18:10:22.372647 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:22.373108 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:22.373139 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:22.373064 838194 retry.go:31] will retry after 477.258709ms: waiting for machine to come up
I0316 18:10:22.851814 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:22.852392 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:22.852427 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:22.852331 838194 retry.go:31] will retry after 637.020571ms: waiting for machine to come up
I0316 18:10:23.491033 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:23.491555 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:23.491582 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:23.491505 838194 retry.go:31] will retry after 728.820234ms: waiting for machine to come up
I0316 18:10:24.222364 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:24.222915 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:24.222950 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:24.222859 838194 retry.go:31] will retry after 816.898868ms: waiting for machine to come up
I0316 18:10:25.041814 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:25.042283 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:25.042326 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:25.042230 838194 retry.go:31] will retry after 1.416019769s: waiting for machine to come up
I0316 18:10:26.460801 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:26.461519 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:26.461555 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:26.461451 838194 retry.go:31] will retry after 1.622056862s: waiting for machine to come up
I0316 18:10:28.086109 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:28.086687 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:28.086720 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:28.086622 838194 retry.go:31] will retry after 1.551263406s: waiting for machine to come up
I0316 18:10:29.640638 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:29.641271 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:29.641306 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:29.641207 838194 retry.go:31] will retry after 2.520185817s: waiting for machine to come up
I0316 18:10:32.162746 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:32.163393 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:32.163429 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:32.163353 838194 retry.go:31] will retry after 3.218166666s: waiting for machine to come up
I0316 18:10:35.382893 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:35.383526 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
I0316 18:10:35.383559 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:35.383435 838194 retry.go:31] will retry after 4.016596788s: waiting for machine to come up
I0316 18:10:39.404886 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.405368 838136 main.go:141] libmachine: (old-k8s-version-985498) Found IP for machine: 192.168.61.233
I0316 18:10:39.405395 838136 main.go:141] libmachine: (old-k8s-version-985498) Reserving static IP address...
I0316 18:10:39.405413 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has current primary IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.405989 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "old-k8s-version-985498", mac: "52:54:00:0d:b3:83", ip: "192.168.61.233"} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:39.406021 838136 main.go:141] libmachine: (old-k8s-version-985498) Reserved static IP address: 192.168.61.233
I0316 18:10:39.406042 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | skip adding static IP to network mk-old-k8s-version-985498 - found existing host DHCP lease matching {name: "old-k8s-version-985498", mac: "52:54:00:0d:b3:83", ip: "192.168.61.233"}
I0316 18:10:39.406053 838136 main.go:141] libmachine: (old-k8s-version-985498) Waiting for SSH to be available...
I0316 18:10:39.406068 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Getting to WaitForSSH function...
I0316 18:10:39.407992 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.408342 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:39.408371 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.408570 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Using SSH client type: external
I0316 18:10:39.408605 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Using SSH private key: /home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa (-rw-------)
I0316 18:10:39.408633 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa -p 22] /usr/bin/ssh <nil>}
I0316 18:10:39.408643 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | About to run SSH command:
I0316 18:10:39.408661 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | exit 0
I0316 18:10:39.536204 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | SSH cmd err, output: <nil>:
I0316 18:10:39.536645 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetConfigRaw
I0316 18:10:39.537326 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetIP
I0316 18:10:39.539731 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.540108 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:39.540150 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.540439 838136 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/config.json ...
I0316 18:10:39.540686 838136 machine.go:94] provisionDockerMachine start ...
I0316 18:10:39.540707 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:10:39.540985 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:10:39.543626 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.544120 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:39.544151 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.544228 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:10:39.544434 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:39.544600 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:39.544778 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:10:39.545027 838136 main.go:141] libmachine: Using SSH client type: native
I0316 18:10:39.545288 838136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil> [] 0s} 192.168.61.233 22 <nil> <nil>}
I0316 18:10:39.545303 838136 main.go:141] libmachine: About to run SSH command:
hostname
I0316 18:10:39.660751 838136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0316 18:10:39.660794 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetMachineName
I0316 18:10:39.661098 838136 buildroot.go:166] provisioning hostname "old-k8s-version-985498"
I0316 18:10:39.661127 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetMachineName
I0316 18:10:39.661364 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:10:39.664277 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.664759 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:39.664795 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.664989 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:10:39.665210 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:39.665386 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:39.665541 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:10:39.665720 838136 main.go:141] libmachine: Using SSH client type: native
I0316 18:10:39.665961 838136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil> [] 0s} 192.168.61.233 22 <nil> <nil>}
I0316 18:10:39.665977 838136 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-985498 && echo "old-k8s-version-985498" | sudo tee /etc/hostname
I0316 18:10:39.797378 838136 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-985498
I0316 18:10:39.797416 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:10:39.800557 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.800933 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:39.800985 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.801139 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:10:39.801364 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:39.801559 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:39.801731 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:10:39.801905 838136 main.go:141] libmachine: Using SSH client type: native
I0316 18:10:39.802103 838136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil> [] 0s} 192.168.61.233 22 <nil> <nil>}
I0316 18:10:39.802120 838136 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-985498' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-985498/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-985498' | sudo tee -a /etc/hosts;
fi
fi
I0316 18:10:39.926528 838136 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0316 18:10:39.926563 838136 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18277-781196/.minikube CaCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18277-781196/.minikube}
I0316 18:10:39.926596 838136 buildroot.go:174] setting up certificates
I0316 18:10:39.926612 838136 provision.go:84] configureAuth start
I0316 18:10:39.926626 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetMachineName
I0316 18:10:39.926990 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetIP
I0316 18:10:39.930056 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.930467 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:39.930501 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.930679 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:10:39.933530 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.933907 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:39.933935 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:39.934073 838136 provision.go:143] copyHostCerts
I0316 18:10:39.934174 838136 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem, removing ...
I0316 18:10:39.934194 838136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem
I0316 18:10:39.934270 838136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem (1082 bytes)
I0316 18:10:39.934462 838136 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem, removing ...
I0316 18:10:39.934480 838136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem
I0316 18:10:39.934519 838136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem (1123 bytes)
I0316 18:10:39.934606 838136 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem, removing ...
I0316 18:10:39.934617 838136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem
I0316 18:10:39.934644 838136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem (1675 bytes)
I0316 18:10:39.934713 838136 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-985498 san=[127.0.0.1 192.168.61.233 localhost minikube old-k8s-version-985498]
I0316 18:10:40.111602 838136 provision.go:177] copyRemoteCerts
I0316 18:10:40.111688 838136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0316 18:10:40.111725 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:10:40.114815 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.115275 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:40.115317 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.115536 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:10:40.115770 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:40.115974 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:10:40.116126 838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
I0316 18:10:40.213547 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0316 18:10:40.245020 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0316 18:10:40.278286 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0316 18:10:40.310385 838136 provision.go:87] duration metric: took 383.757716ms to configureAuth
I0316 18:10:40.310424 838136 buildroot.go:189] setting minikube options for container-runtime
I0316 18:10:40.310620 838136 config.go:182] Loaded profile config "old-k8s-version-985498": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0316 18:10:40.310632 838136 machine.go:97] duration metric: took 769.932485ms to provisionDockerMachine
I0316 18:10:40.310641 838136 start.go:293] postStartSetup for "old-k8s-version-985498" (driver="kvm2")
I0316 18:10:40.310650 838136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0316 18:10:40.310685 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:10:40.311113 838136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0316 18:10:40.311153 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:10:40.313816 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.314242 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:40.314273 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.314463 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:10:40.314713 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:40.314895 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:10:40.315042 838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
I0316 18:10:40.403815 838136 ssh_runner.go:195] Run: cat /etc/os-release
I0316 18:10:40.409451 838136 info.go:137] Remote host: Buildroot 2023.02.9
I0316 18:10:40.409493 838136 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-781196/.minikube/addons for local assets ...
I0316 18:10:40.409577 838136 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-781196/.minikube/files for local assets ...
I0316 18:10:40.409678 838136 filesync.go:149] local asset: /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem -> 7884422.pem in /etc/ssl/certs
I0316 18:10:40.409770 838136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0316 18:10:40.421303 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem --> /etc/ssl/certs/7884422.pem (1708 bytes)
I0316 18:10:40.452568 838136 start.go:296] duration metric: took 141.910752ms for postStartSetup
I0316 18:10:40.452624 838136 fix.go:56] duration metric: took 20.439246626s for fixHost
I0316 18:10:40.452650 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:10:40.455622 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.456038 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:40.456075 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.456316 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:10:40.456559 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:40.456763 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:40.456999 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:10:40.457227 838136 main.go:141] libmachine: Using SSH client type: native
I0316 18:10:40.457479 838136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil> [] 0s} 192.168.61.233 22 <nil> <nil>}
I0316 18:10:40.457498 838136 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0316 18:10:40.573393 838136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710612640.549571184
I0316 18:10:40.573420 838136 fix.go:216] guest clock: 1710612640.549571184
I0316 18:10:40.573430 838136 fix.go:229] Guest: 2024-03-16 18:10:40.549571184 +0000 UTC Remote: 2024-03-16 18:10:40.452629594 +0000 UTC m=+26.360717773 (delta=96.94159ms)
I0316 18:10:40.573489 838136 fix.go:200] guest clock delta is within tolerance: 96.94159ms
I0316 18:10:40.573501 838136 start.go:83] releasing machines lock for "old-k8s-version-985498", held for 20.560153338s
I0316 18:10:40.573547 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:10:40.573911 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetIP
I0316 18:10:40.577073 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.577471 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:40.577504 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.577730 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:10:40.578282 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:10:40.578505 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:10:40.578650 838136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0316 18:10:40.578701 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:10:40.578767 838136 ssh_runner.go:195] Run: cat /version.json
I0316 18:10:40.578795 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:10:40.581653 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.581938 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.582103 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:40.582135 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.582407 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:10:40.582409 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:40.582485 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:40.582636 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:10:40.582644 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:40.582931 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:10:40.582931 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:10:40.583100 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:10:40.583109 838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
I0316 18:10:40.583245 838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
I0316 18:10:40.669901 838136 ssh_runner.go:195] Run: systemctl --version
I0316 18:10:40.699529 838136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0316 18:10:40.707058 838136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0316 18:10:40.707154 838136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0316 18:10:40.730239 838136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0316 18:10:40.730271 838136 start.go:494] detecting cgroup driver to use...
I0316 18:10:40.730364 838136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0316 18:10:40.761933 838136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0316 18:10:40.781984 838136 docker.go:217] disabling cri-docker service (if available) ...
I0316 18:10:40.782061 838136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0316 18:10:40.801506 838136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0316 18:10:40.819340 838136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0316 18:10:40.969263 838136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0316 18:10:41.151776 838136 docker.go:233] disabling docker service ...
I0316 18:10:41.151862 838136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0316 18:10:41.170046 838136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0316 18:10:41.186577 838136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0316 18:10:41.320488 838136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0316 18:10:41.451266 838136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0316 18:10:41.472978 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0316 18:10:41.504957 838136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0316 18:10:41.520192 838136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0316 18:10:41.534403 838136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0316 18:10:41.534478 838136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0316 18:10:41.549329 838136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0316 18:10:41.564261 838136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0316 18:10:41.578801 838136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0316 18:10:41.593218 838136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0316 18:10:41.608880 838136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0316 18:10:41.624269 838136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0316 18:10:41.638565 838136 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0316 18:10:41.638657 838136 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0316 18:10:41.658517 838136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0316 18:10:41.673552 838136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0316 18:10:41.835260 838136 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0316 18:10:41.871243 838136 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
I0316 18:10:41.871346 838136 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0316 18:10:41.879650 838136 retry.go:31] will retry after 585.266083ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0316 18:10:42.465241 838136 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0316 18:10:42.471699 838136 start.go:562] Will wait 60s for crictl version
I0316 18:10:42.471794 838136 ssh_runner.go:195] Run: which crictl
I0316 18:10:42.477964 838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0316 18:10:42.526073 838136 start.go:578] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.14
RuntimeApiVersion: v1
I0316 18:10:42.526153 838136 ssh_runner.go:195] Run: containerd --version
I0316 18:10:42.560338 838136 ssh_runner.go:195] Run: containerd --version
I0316 18:10:42.593533 838136 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.14 ...
I0316 18:10:42.595003 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetIP
I0316 18:10:42.598356 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:42.598926 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:10:42.598994 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:10:42.599201 838136 ssh_runner.go:195] Run: grep 192.168.61.1 host.minikube.internal$ /etc/hosts
I0316 18:10:42.606182 838136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
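
The bash one-liner above rewrites /etc/hosts by filtering the old entry into a temp file, appending the fresh mapping, and copying the whole file back rather than editing in place. A Go sketch of the same idea (IP and hostname are the ones from the log; needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Sketch: drop any stale host.minikube.internal line, append the current
// mapping, and install the result as a whole-file replacement.
func main() {
	const hostsPath = "/etc/hosts"
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.61.1\thost.minikube.internal")
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.Rename(tmp, hostsPath); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
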
I0316 18:10:42.625976 838136 kubeadm.go:877] updating cluster {Name:old-k8s-version-985498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-985498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0316 18:10:42.626141 838136 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0316 18:10:42.626223 838136 ssh_runner.go:195] Run: sudo crictl images --output json
I0316 18:10:42.669448 838136 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
I0316 18:10:42.669536 838136 ssh_runner.go:195] Run: which lz4
I0316 18:10:42.674827 838136 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0316 18:10:42.680325 838136 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0316 18:10:42.680366 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (472503869 bytes)
I0316 18:10:44.949609 838136 containerd.go:548] duration metric: took 2.274832755s to copy over tarball
I0316 18:10:44.949734 838136 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0316 18:10:48.512412 838136 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.56263958s)
I0316 18:10:48.512448 838136 containerd.go:555] duration metric: took 3.562786414s to extract the tarball
I0316 18:10:48.512460 838136 ssh_runner.go:146] rm: /preloaded.tar.lz4
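
Preload handling above is: stat the tarball, scp it from the local cache if absent, extract under /var with xattrs preserved, then delete it. A sketch of the check-and-extract half, using the exact tar flags from the log; the scp step is elided:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Sketch: extract the preloaded image tarball into /var so the containerd
// image store lands in place. Keeps security xattrs, decompresses with lz4.
func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "tarball missing; minikube would scp it from its cache first")
		os.Exit(1)
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "extract: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Println("preloaded images extracted")
}
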
I0316 18:10:48.576915 838136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0316 18:10:48.715869 838136 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0316 18:10:48.754638 838136 ssh_runner.go:195] Run: sudo crictl images --output json
I0316 18:10:48.820562 838136 retry.go:31] will retry after 253.219113ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2024-03-16T18:10:48Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I0316 18:10:49.074051 838136 ssh_runner.go:195] Run: sudo crictl images --output json
I0316 18:10:49.121260 838136 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
I0316 18:10:49.121296 838136 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
I0316 18:10:49.121430 838136 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
I0316 18:10:49.121429 838136 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0316 18:10:49.121429 838136 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
I0316 18:10:49.121520 838136 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0316 18:10:49.121520 838136 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
I0316 18:10:49.121525 838136 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
I0316 18:10:49.121729 838136 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
I0316 18:10:49.121449 838136 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
I0316 18:10:49.123357 838136 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0316 18:10:49.123624 838136 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
I0316 18:10:49.123660 838136 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
I0316 18:10:49.123687 838136 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
I0316 18:10:49.123781 838136 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0316 18:10:49.123623 838136 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
I0316 18:10:49.123627 838136 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
I0316 18:10:49.123895 838136 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
I0316 18:10:49.292076 838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.4.13-0" and sha "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
I0316 18:10:49.292145 838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0316 18:10:49.315743 838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.20.0" and sha "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
I0316 18:10:49.315853 838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0316 18:10:49.316284 838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.7.0" and sha "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
I0316 18:10:49.316353 838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0316 18:10:49.335326 838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
I0316 18:10:49.335410 838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0316 18:10:49.351065 838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.20.0" and sha "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
I0316 18:10:49.351144 838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0316 18:10:49.353042 838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.20.0" and sha "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
I0316 18:10:49.353130 838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0316 18:10:49.382901 838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.20.0" and sha "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
I0316 18:10:49.382999 838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0316 18:10:49.613076 838136 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
I0316 18:10:49.613213 838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0316 18:10:50.173021 838136 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
I0316 18:10:50.173118 838136 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
I0316 18:10:50.173039 838136 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
I0316 18:10:50.173235 838136 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
I0316 18:10:50.173181 838136 ssh_runner.go:195] Run: which crictl
I0316 18:10:50.173288 838136 ssh_runner.go:195] Run: which crictl
I0316 18:10:50.369202 838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.052814345s)
I0316 18:10:50.369298 838136 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
I0316 18:10:50.369376 838136 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
I0316 18:10:50.369445 838136 ssh_runner.go:195] Run: which crictl
I0316 18:10:50.846406 838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.510962988s)
I0316 18:10:50.846482 838136 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0316 18:10:50.846523 838136 cri.go:218] Removing image: registry.k8s.io/pause:3.2
I0316 18:10:50.846578 838136 ssh_runner.go:195] Run: which crictl
I0316 18:10:50.955793 838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.604615227s)
I0316 18:10:50.955872 838136 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
I0316 18:10:50.955922 838136 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
I0316 18:10:50.955979 838136 ssh_runner.go:195] Run: which crictl
I0316 18:10:50.956009 838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.602852446s)
I0316 18:10:50.956074 838136 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
I0316 18:10:50.956114 838136 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
I0316 18:10:50.956160 838136 ssh_runner.go:195] Run: which crictl
I0316 18:10:50.956553 838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.573535089s)
I0316 18:10:50.956605 838136 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
I0316 18:10:50.956639 838136 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
I0316 18:10:50.956689 838136 ssh_runner.go:195] Run: which crictl
I0316 18:10:50.968854 838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.355597673s)
I0316 18:10:50.969003 838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
I0316 18:10:50.969118 838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
I0316 18:10:50.969024 838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
I0316 18:10:50.969047 838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
I0316 18:10:50.969290 838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
I0316 18:10:50.974720 838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
I0316 18:10:50.974812 838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
I0316 18:10:51.167022 838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
I0316 18:10:51.167036 838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
I0316 18:10:51.167035 838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0316 18:10:51.167121 838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
I0316 18:10:51.167166 838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
I0316 18:10:51.171034 838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
I0316 18:10:51.171105 838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
I0316 18:10:51.171167 838136 cache_images.go:92] duration metric: took 2.049852434s to LoadCachedImages
W0316 18:10:51.171235 838136 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
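
The cache paths in these messages imply a simple layout: an image reference maps to a file under cache/images/<arch>/ with the ':' tag separator replaced by '_'. A small sketch of that mapping (the base directory below is illustrative):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// Sketch of the cache layout implied by the log's "Loading image from:" lines.
func cachePath(base, arch, ref string) string {
	return filepath.Join(base, "cache", "images", arch, strings.ReplaceAll(ref, ":", "_"))
}

func main() {
	fmt.Println(cachePath("/home/jenkins/.minikube", "amd64", "registry.k8s.io/kube-scheduler:v1.20.0"))
	// -> /home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
}
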
I0316 18:10:51.171248 838136 kubeadm.go:928] updating node { 192.168.61.233 8443 v1.20.0 containerd true true} ...
I0316 18:10:51.171417 838136 kubeadm.go:940] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-985498 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.233
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-985498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
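
The kubelet drop-in above is rendered from a handful of per-node values (Kubernetes version, hostname, node IP). A sketch using text/template to produce the same unit; the template fields are our own names, not minikube's:

package main

import (
	"os"
	"text/template"
)

// Sketch: render the kubelet systemd drop-in shown in the log.
const unit = `[Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}
[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.20.0", "old-k8s-version-985498", "192.168.61.233"})
}
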
I0316 18:10:51.171507 838136 ssh_runner.go:195] Run: sudo crictl info
I0316 18:10:51.211690 838136 cni.go:84] Creating CNI manager for ""
I0316 18:10:51.211724 838136 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0316 18:10:51.211740 838136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0316 18:10:51.211767 838136 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.233 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-985498 NodeName:old-k8s-version-985498 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0316 18:10:51.211984 838136 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.61.233
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-985498"
kubeletExtraArgs:
node-ip: 192.168.61.233
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.61.233"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
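
The rendered kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that lists each document's kind as a cheap sanity check before handing the file to kubeadm, assuming gopkg.in/yaml.v3 is available:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Sketch: walk the multi-document stream and print each document's kind.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
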
I0316 18:10:51.212083 838136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0316 18:10:51.228556 838136 binaries.go:44] Found k8s binaries, skipping transfer
I0316 18:10:51.228674 838136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0316 18:10:51.243247 838136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (444 bytes)
I0316 18:10:51.269296 838136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0316 18:10:51.294856 838136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
I0316 18:10:51.318596 838136 ssh_runner.go:195] Run: grep 192.168.61.233 control-plane.minikube.internal$ /etc/hosts
I0316 18:10:51.324332 838136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.233 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0316 18:10:51.343249 838136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0316 18:10:51.481783 838136 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0316 18:10:51.510038 838136 certs.go:68] Setting up /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498 for IP: 192.168.61.233
I0316 18:10:51.510076 838136 certs.go:194] generating shared ca certs ...
I0316 18:10:51.510102 838136 certs.go:226] acquiring lock for ca certs: {Name:mk0c50354a81ee6e126f21f3d5a16214134194fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0316 18:10:51.510322 838136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18277-781196/.minikube/ca.key
I0316 18:10:51.510398 838136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.key
I0316 18:10:51.510416 838136 certs.go:256] generating profile certs ...
I0316 18:10:51.510563 838136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/client.key
I0316 18:10:51.510652 838136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/apiserver.key.39495394
I0316 18:10:51.510708 838136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/proxy-client.key
I0316 18:10:51.510895 838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442.pem (1338 bytes)
W0316 18:10:51.510939 838136 certs.go:480] ignoring /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442_empty.pem, impossibly tiny 0 bytes
I0316 18:10:51.510947 838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem (1679 bytes)
I0316 18:10:51.510974 838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem (1082 bytes)
I0316 18:10:51.511006 838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem (1123 bytes)
I0316 18:10:51.511042 838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem (1675 bytes)
I0316 18:10:51.511102 838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem (1708 bytes)
I0316 18:10:51.512190 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0316 18:10:51.570699 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0316 18:10:51.611800 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0316 18:10:51.659890 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0316 18:10:51.709400 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0316 18:10:51.755499 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0316 18:10:51.812896 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0316 18:10:51.845974 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0316 18:10:51.879055 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0316 18:10:51.916045 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442.pem --> /usr/share/ca-certificates/788442.pem (1338 bytes)
I0316 18:10:51.950923 838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem --> /usr/share/ca-certificates/7884422.pem (1708 bytes)
I0316 18:10:51.983369 838136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0316 18:10:52.009024 838136 ssh_runner.go:195] Run: openssl version
I0316 18:10:52.016900 838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0316 18:10:52.033483 838136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0316 18:10:52.039694 838136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 16 16:56 /usr/share/ca-certificates/minikubeCA.pem
I0316 18:10:52.039802 838136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0316 18:10:52.047286 838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0316 18:10:52.063453 838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/788442.pem && ln -fs /usr/share/ca-certificates/788442.pem /etc/ssl/certs/788442.pem"
I0316 18:10:52.079354 838136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/788442.pem
I0316 18:10:52.085657 838136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 16 17:02 /usr/share/ca-certificates/788442.pem
I0316 18:10:52.085721 838136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/788442.pem
I0316 18:10:52.093263 838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/788442.pem /etc/ssl/certs/51391683.0"
I0316 18:10:52.108530 838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7884422.pem && ln -fs /usr/share/ca-certificates/7884422.pem /etc/ssl/certs/7884422.pem"
I0316 18:10:52.124106 838136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7884422.pem
I0316 18:10:52.131740 838136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 16 17:02 /usr/share/ca-certificates/7884422.pem
I0316 18:10:52.131825 838136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7884422.pem
I0316 18:10:52.141047 838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7884422.pem /etc/ssl/certs/3ec20f2e.0"
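
The openssl x509 -hash / ln -fs pairs above create the subject-hash symlinks (e.g. b5213941.0) that OpenSSL uses to look up CA certificates by directory scan. A sketch of creating one such link, shelling out for the hash as the log does (needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Sketch: compute a cert's OpenSSL subject hash and create the <hash>.0
// symlink under /etc/ssl/certs, like the `ln -fs` lines above.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace any stale link, as -f does
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
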
I0316 18:10:52.157549 838136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0316 18:10:52.165808 838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0316 18:10:52.173668 838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0316 18:10:52.183767 838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0316 18:10:52.193964 838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0316 18:10:52.204458 838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0316 18:10:52.214907 838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
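
Each -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The same check in pure Go with crypto/x509, against one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Sketch: stdlib equivalent of `openssl x509 -noout -checkend 86400`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
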
I0316 18:10:52.223094 838136 kubeadm.go:391] StartCluster: {Name:old-k8s-version-985498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-985498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0316 18:10:52.223233 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0316 18:10:52.223368 838136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0316 18:10:52.283104 838136 cri.go:89] found id: ""
I0316 18:10:52.283208 838136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
W0316 18:10:52.297855 838136 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
I0316 18:10:52.297885 838136 kubeadm.go:407] found existing configuration files, will attempt cluster restart
I0316 18:10:52.297892 838136 kubeadm.go:587] restartPrimaryControlPlane start ...
I0316 18:10:52.297948 838136 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0316 18:10:52.312007 838136 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0316 18:10:52.312741 838136 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-985498" does not appear in /home/jenkins/minikube-integration/18277-781196/kubeconfig
I0316 18:10:52.313164 838136 kubeconfig.go:62] /home/jenkins/minikube-integration/18277-781196/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-985498" cluster setting kubeconfig missing "old-k8s-version-985498" context setting]
I0316 18:10:52.313996 838136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-781196/kubeconfig: {Name:mke76908283b58e263a226954335fd60fd02692a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0316 18:10:52.315560 838136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0316 18:10:52.328791 838136 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.233
I0316 18:10:52.328841 838136 kubeadm.go:1154] stopping kube-system containers ...
I0316 18:10:52.328860 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0316 18:10:52.328936 838136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0316 18:10:52.384396 838136 cri.go:89] found id: ""
I0316 18:10:52.384490 838136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0316 18:10:52.405530 838136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0316 18:10:52.422845 838136 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0316 18:10:52.422874 838136 kubeadm.go:156] found existing configuration files:
I0316 18:10:52.422931 838136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0316 18:10:52.435759 838136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0316 18:10:52.435862 838136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0316 18:10:52.448728 838136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0316 18:10:52.463228 838136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0316 18:10:52.463318 838136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0316 18:10:52.476194 838136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0316 18:10:52.488899 838136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0316 18:10:52.488997 838136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0316 18:10:52.502754 838136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0316 18:10:52.519699 838136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0316 18:10:52.519801 838136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
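
The grep/rm sequence above implements a simple rule: any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint (or cannot be read) is deleted so kubeadm regenerates it. A compact sketch of that loop:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Sketch: remove stale or missing kubeconfigs so `kubeadm init phase
// kubeconfig` rewrites them against the current endpoint.
func main() {
	const want = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), want) {
			os.Remove(path) // missing or stale: let kubeadm rewrite it
			fmt.Println("removed", path)
		}
	}
}
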
I0316 18:10:52.537443 838136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0316 18:10:52.555161 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0316 18:10:52.726314 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0316 18:10:53.471844 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0316 18:10:53.737175 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0316 18:10:53.847785 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
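
Rather than a full `kubeadm init`, the restart path above replays individual init phases against the rendered config. A sketch of that sequence; the PATH prefixing to /var/lib/minikube/binaries seen in the log is elided:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Sketch: run the same init phases the log shows, in order, against the
// generated kubeadm.yaml.
func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm init phase %s: %v\n%s", phase, err, out)
			os.Exit(1)
		}
	}
}
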
I0316 18:10:53.967192 838136 api_server.go:52] waiting for apiserver process to appear ...
I0316 18:10:53.967378 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:54.468173 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:54.967746 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:55.467902 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:55.968049 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:56.467610 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:56.968426 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:57.467602 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:57.967524 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:58.468280 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:58.968219 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:59.467869 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:10:59.968099 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:00.467595 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:00.968048 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:01.467398 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:01.968323 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:02.467993 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:02.967635 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:03.467602 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:03.967580 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:04.468074 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:04.968250 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:05.467376 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:05.967683 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:06.468018 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:06.967572 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:07.468059 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:07.967500 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:08.467656 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:08.967734 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:09.467594 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:09.968197 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:10.467605 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:10.967628 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:11.467363 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:11.967611 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:12.468445 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:12.968106 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:13.467411 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:13.968224 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:14.467977 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:14.967979 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:15.468293 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:15.968081 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:16.468180 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:16.968339 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:17.468090 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:17.968057 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:18.467469 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:18.968180 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:19.468133 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:19.967667 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:20.467601 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:20.968051 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:21.468076 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:21.967628 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:22.467801 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:22.967632 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:23.467946 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:23.968421 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:24.468452 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:24.968223 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:25.468353 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:25.967603 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:26.468242 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:26.967430 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:27.467842 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:27.967560 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:28.467586 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:28.967716 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:11:28.984523 838136 api_server.go:72] duration metric: took 35.017328517s to wait for apiserver process to appear ...
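
The long run of pgrep lines above is a 500ms poll for the kube-apiserver process (the log runs it via sudo over SSH). A sketch of the same bounded wait; the timeout below is an assumption, since the log only reports the 35s that elapsed:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Sketch: poll pgrep every 500ms until kube-apiserver appears or the
// deadline passes. Same match as the log: full command line, newest process.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
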
I0316 18:11:28.984560 838136 api_server.go:88] waiting for apiserver healthz status ...
I0316 18:11:28.984607 838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
I0316 18:11:32.870510 838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0316 18:11:32.870552 838136 api_server.go:103] status: https://192.168.61.233:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0316 18:11:32.870575 838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
I0316 18:11:32.913992 838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0316 18:11:32.914029 838136 api_server.go:103] status: https://192.168.61.233:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0316 18:11:32.985178 838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
I0316 18:11:33.052130 838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0316 18:11:33.052184 838136 api_server.go:103] status: https://192.168.61.233:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0316 18:11:33.485698 838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
I0316 18:11:33.492841 838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0316 18:11:33.492885 838136 api_server.go:103] status: https://192.168.61.233:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0316 18:11:33.985533 838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
I0316 18:11:34.009045 838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0316 18:11:34.009085 838136 api_server.go:103] status: https://192.168.61.233:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0316 18:11:34.485324 838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
I0316 18:11:34.493652 838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 200:
ok
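
The healthz sequence above is the normal restart progression: 403 while anonymous RBAC is still being bootstrapped, 500 while post-start hooks (bootstrap-controller, rbac/bootstrap-roles, apiservice-registration-controller, ...) finish, then 200. A sketch of such a probe; certificate verification is skipped here for brevity, whereas minikube trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// Sketch: 403 and 500 are treated as "not ready yet" and retried; only 200
// ends the wait. IP and port are the ones from the log.
func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.233:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}
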
I0316 18:11:34.503212 838136 api_server.go:141] control plane version: v1.20.0
I0316 18:11:34.503250 838136 api_server.go:131] duration metric: took 5.518681043s to wait for apiserver health ...
I0316 18:11:34.503263 838136 cni.go:84] Creating CNI manager for ""
I0316 18:11:34.503272 838136 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0316 18:11:34.504811 838136 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0316 18:11:34.506291 838136 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0316 18:11:34.526346 838136 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0316 18:11:34.557313 838136 system_pods.go:43] waiting for kube-system pods to appear ...
I0316 18:11:34.567657 838136 system_pods.go:59] 8 kube-system pods found
I0316 18:11:34.567715 838136 system_pods.go:61] "coredns-74ff55c5b-p8874" [e9f21303-b312-4077-8cdc-aa1f38acf881] Running
I0316 18:11:34.567724 838136 system_pods.go:61] "etcd-old-k8s-version-985498" [2d58d97d-a406-4bdf-98f3-7456be608d31] Running
I0316 18:11:34.567730 838136 system_pods.go:61] "kube-apiserver-old-k8s-version-985498" [515faf17-7382-4227-8a1c-d9d7f40dd40b] Running
I0316 18:11:34.567741 838136 system_pods.go:61] "kube-controller-manager-old-k8s-version-985498" [e2f7c70f-6441-4b0d-914f-22fbea47af98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0316 18:11:34.567760 838136 system_pods.go:61] "kube-proxy-nvd4k" [daf8607f-2ff3-4d80-b1ed-ca2d24cb6b36] Running
I0316 18:11:34.567766 838136 system_pods.go:61] "kube-scheduler-old-k8s-version-985498" [197c4d67-dd09-4cfd-91b5-9cfbadab76dc] Running
I0316 18:11:34.567771 838136 system_pods.go:61] "metrics-server-9975d5f86-xqhk9" [ba5c6fa2-191f-4ae2-8aee-b1075a50b37b] Pending
I0316 18:11:34.567774 838136 system_pods.go:61] "storage-provisioner" [d89b271f-838a-4592-b128-fcb2a06fc5e9] Running
I0316 18:11:34.567782 838136 system_pods.go:74] duration metric: took 10.438526ms to wait for pod list to return data ...
I0316 18:11:34.567800 838136 node_conditions.go:102] verifying NodePressure condition ...
I0316 18:11:34.581203 838136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0316 18:11:34.581237 838136 node_conditions.go:123] node cpu capacity is 2
I0316 18:11:34.581250 838136 node_conditions.go:105] duration metric: took 13.443606ms to run NodePressure ...
I0316 18:11:34.581319 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0316 18:11:34.942383 838136 kubeadm.go:718] waiting for restarted kubelet to initialise ...
I0316 18:11:34.950961 838136 kubeadm.go:733] kubelet initialised
I0316 18:11:34.950999 838136 kubeadm.go:734] duration metric: took 8.586934ms waiting for restarted kubelet to initialise ...
I0316 18:11:34.951010 838136 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0316 18:11:34.962246 838136 pod_ready.go:78] waiting up to 4m0s for pod "coredns-74ff55c5b-p8874" in "kube-system" namespace to be "Ready" ...
I0316 18:11:34.974731 838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "coredns-74ff55c5b-p8874" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:34.974773 838136 pod_ready.go:81] duration metric: took 12.48904ms for pod "coredns-74ff55c5b-p8874" in "kube-system" namespace to be "Ready" ...
E0316 18:11:34.974788 838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "coredns-74ff55c5b-p8874" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:34.974798 838136 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:11:34.981823 838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "etcd-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:34.981862 838136 pod_ready.go:81] duration metric: took 7.047238ms for pod "etcd-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
E0316 18:11:34.981877 838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "etcd-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:34.981886 838136 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:11:34.995159 838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:34.995193 838136 pod_ready.go:81] duration metric: took 13.296838ms for pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
E0316 18:11:34.995202 838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:34.995210 838136 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:11:35.001459 838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:35.001499 838136 pod_ready.go:81] duration metric: took 6.27941ms for pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
E0316 18:11:35.001514 838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:35.001525 838136 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nvd4k" in "kube-system" namespace to be "Ready" ...
I0316 18:11:35.361513 838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "kube-proxy-nvd4k" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:35.361550 838136 pod_ready.go:81] duration metric: took 360.016182ms for pod "kube-proxy-nvd4k" in "kube-system" namespace to be "Ready" ...
E0316 18:11:35.361564 838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "kube-proxy-nvd4k" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:35.361573 838136 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:11:35.762838 838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:35.762878 838136 pod_ready.go:81] duration metric: took 401.293557ms for pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
E0316 18:11:35.762891 838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:35.762901 838136 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace to be "Ready" ...
I0316 18:11:36.161627 838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:36.161683 838136 pod_ready.go:81] duration metric: took 398.769929ms for pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace to be "Ready" ...
E0316 18:11:36.161697 838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
I0316 18:11:36.161707 838136 pod_ready.go:38] duration metric: took 1.210684392s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0316 18:11:36.161732 838136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0316 18:11:36.176854 838136 ops.go:34] apiserver oom_adj: -16
I0316 18:11:36.176886 838136 kubeadm.go:591] duration metric: took 43.878986103s to restartPrimaryControlPlane
I0316 18:11:36.176899 838136 kubeadm.go:393] duration metric: took 43.953820603s to StartCluster
I0316 18:11:36.176925 838136 settings.go:142] acquiring lock: {Name:mk5e1e3433840176063e5baa5db7056716046a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0316 18:11:36.177083 838136 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/18277-781196/kubeconfig
I0316 18:11:36.178481 838136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-781196/kubeconfig: {Name:mke76908283b58e263a226954335fd60fd02692a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0316 18:11:36.178774 838136 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0316 18:11:36.180502 838136 out.go:177] * Verifying Kubernetes components...
I0316 18:11:36.178867 838136 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
I0316 18:11:36.179001 838136 config.go:182] Loaded profile config "old-k8s-version-985498": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0316 18:11:36.182040 838136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0316 18:11:36.180607 838136 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-985498"
I0316 18:11:36.182129 838136 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-985498"
W0316 18:11:36.182149 838136 addons.go:243] addon storage-provisioner should already be in state true
I0316 18:11:36.180618 838136 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-985498"
I0316 18:11:36.182203 838136 host.go:66] Checking if "old-k8s-version-985498" exists ...
I0316 18:11:36.182220 838136 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-985498"
W0316 18:11:36.182237 838136 addons.go:243] addon metrics-server should already be in state true
I0316 18:11:36.182277 838136 host.go:66] Checking if "old-k8s-version-985498" exists ...
I0316 18:11:36.180620 838136 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-985498"
I0316 18:11:36.182380 838136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-985498"
I0316 18:11:36.180611 838136 addons.go:69] Setting dashboard=true in profile "old-k8s-version-985498"
I0316 18:11:36.182474 838136 addons.go:234] Setting addon dashboard=true in "old-k8s-version-985498"
W0316 18:11:36.182488 838136 addons.go:243] addon dashboard should already be in state true
I0316 18:11:36.182514 838136 host.go:66] Checking if "old-k8s-version-985498" exists ...
I0316 18:11:36.182699 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:11:36.182717 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:11:36.182734 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:11:36.182747 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:11:36.182751 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:11:36.182755 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:11:36.183027 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:11:36.183052 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:11:36.200932 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
I0316 18:11:36.201436 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:11:36.202011 838136 main.go:141] libmachine: Using API Version 1
I0316 18:11:36.202040 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:11:36.202468 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:11:36.202986 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:11:36.203022 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:11:36.204619 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
I0316 18:11:36.205064 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:11:36.205611 838136 main.go:141] libmachine: Using API Version 1
I0316 18:11:36.205629 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:11:36.205992 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:11:36.206210 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
I0316 18:11:36.209087 838136 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-985498"
W0316 18:11:36.209109 838136 addons.go:243] addon default-storageclass should already be in state true
I0316 18:11:36.209139 838136 host.go:66] Checking if "old-k8s-version-985498" exists ...
I0316 18:11:36.209413 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:11:36.209449 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:11:36.221952 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
I0316 18:11:36.222542 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:11:36.223131 838136 main.go:141] libmachine: Using API Version 1
I0316 18:11:36.223167 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:11:36.223617 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:11:36.223833 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
I0316 18:11:36.226013 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:11:36.228536 838136 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0316 18:11:36.230082 838136 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0316 18:11:36.230114 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0316 18:11:36.230150 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:11:36.234118 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:11:36.234161 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
I0316 18:11:36.234343 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
I0316 18:11:36.234769 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:11:36.234886 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:11:36.235597 838136 main.go:141] libmachine: Using API Version 1
I0316 18:11:36.235613 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:11:36.235675 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:11:36.235688 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:11:36.235775 838136 main.go:141] libmachine: Using API Version 1
I0316 18:11:36.235782 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:11:36.236050 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:11:36.236114 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:11:36.236237 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:11:36.236276 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:11:36.236445 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:11:36.236691 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:11:36.236739 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:11:36.236781 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:11:36.236798 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:11:36.237065 838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
I0316 18:11:36.241687 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34995
I0316 18:11:36.242296 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:11:36.242865 838136 main.go:141] libmachine: Using API Version 1
I0316 18:11:36.242884 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:11:36.243348 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:11:36.243986 838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:11:36.244029 838136 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:11:36.259433 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33197
I0316 18:11:36.260193 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:11:36.260357 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
I0316 18:11:36.260722 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:11:36.260954 838136 main.go:141] libmachine: Using API Version 1
I0316 18:11:36.260974 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:11:36.261212 838136 main.go:141] libmachine: Using API Version 1
I0316 18:11:36.261233 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:11:36.261619 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:11:36.261729 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:11:36.262042 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
I0316 18:11:36.262194 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
I0316 18:11:36.264661 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:11:36.264741 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:11:36.266992 838136 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0316 18:11:36.265746 838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33845
I0316 18:11:36.272723 838136 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0316 18:11:36.271366 838136 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0316 18:11:36.272207 838136 main.go:141] libmachine: () Calling .GetVersion
I0316 18:11:36.274209 838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0316 18:11:36.274233 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0316 18:11:36.274263 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:11:36.274877 838136 main.go:141] libmachine: Using API Version 1
I0316 18:11:36.275899 838136 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:11:36.275919 838136 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0316 18:11:36.275941 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0316 18:11:36.275967 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:11:36.276574 838136 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:11:36.276891 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
I0316 18:11:36.277922 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:11:36.278494 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:11:36.278530 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:11:36.278688 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:11:36.278869 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:11:36.279042 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:11:36.279221 838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
I0316 18:11:36.280355 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
I0316 18:11:36.280673 838136 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
I0316 18:11:36.280698 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0316 18:11:36.280718 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
I0316 18:11:36.281348 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:11:36.281775 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:11:36.281803 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:11:36.281972 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:11:36.282163 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:11:36.282315 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:11:36.282453 838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
I0316 18:11:36.286939 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
I0316 18:11:36.286962 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:11:36.286993 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
I0316 18:11:36.287015 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
I0316 18:11:36.287249 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
I0316 18:11:36.287468 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
I0316 18:11:36.287655 838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
I0316 18:11:36.395392 838136 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0316 18:11:36.418891 838136 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-985498" to be "Ready" ...
I0316 18:11:36.491432 838136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0316 18:11:36.491479 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0316 18:11:36.517572 838136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0316 18:11:36.517605 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0316 18:11:36.521428 838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0316 18:11:36.521456 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0316 18:11:36.562163 838136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0316 18:11:36.562207 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0316 18:11:36.574387 838136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0316 18:11:36.579373 838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0316 18:11:36.579406 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0316 18:11:36.589252 838136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0316 18:11:36.632946 838136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0316 18:11:36.636515 838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0316 18:11:36.636541 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0316 18:11:36.734664 838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0316 18:11:36.734698 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0316 18:11:36.851243 838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
I0316 18:11:36.851276 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0316 18:11:37.123790 838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0316 18:11:37.123831 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0316 18:11:37.211386 838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0316 18:11:37.211428 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0316 18:11:37.257734 838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0316 18:11:37.257773 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0316 18:11:37.326672 838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0316 18:11:37.326704 838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0316 18:11:37.428817 838136 node_ready.go:49] node "old-k8s-version-985498" has status "Ready":"True"
I0316 18:11:37.428860 838136 node_ready.go:38] duration metric: took 1.009919806s for node "old-k8s-version-985498" to be "Ready" ...
I0316 18:11:37.428875 838136 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0316 18:11:37.449747 838136 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-p8874" in "kube-system" namespace to be "Ready" ...
I0316 18:11:37.485833 838136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0316 18:11:37.593660 838136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.019186178s)
I0316 18:11:37.593823 838136 main.go:141] libmachine: Making call to close driver server
I0316 18:11:37.593879 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
I0316 18:11:37.594312 838136 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:11:37.594382 838136 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:11:37.594398 838136 main.go:141] libmachine: Making call to close driver server
I0316 18:11:37.594409 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
I0316 18:11:37.594317 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
I0316 18:11:37.594720 838136 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:11:37.594743 838136 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:11:37.603646 838136 main.go:141] libmachine: Making call to close driver server
I0316 18:11:37.603751 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
I0316 18:11:37.604250 838136 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:11:37.604271 838136 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:11:37.604286 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
I0316 18:11:37.814018 838136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.224723801s)
I0316 18:11:37.814092 838136 main.go:141] libmachine: Making call to close driver server
I0316 18:11:37.814108 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
I0316 18:11:37.814160 838136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.181164264s)
I0316 18:11:37.814210 838136 main.go:141] libmachine: Making call to close driver server
I0316 18:11:37.814226 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
I0316 18:11:37.814831 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
I0316 18:11:37.814840 838136 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:11:37.814855 838136 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:11:37.814865 838136 main.go:141] libmachine: Making call to close driver server
I0316 18:11:37.814875 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
I0316 18:11:37.814907 838136 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:11:37.814929 838136 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:11:37.814938 838136 main.go:141] libmachine: Making call to close driver server
I0316 18:11:37.814951 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
I0316 18:11:37.815318 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
I0316 18:11:37.815352 838136 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:11:37.815359 838136 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:11:37.815369 838136 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-985498"
I0316 18:11:37.815465 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
I0316 18:11:37.815504 838136 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:11:37.815521 838136 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:11:38.290554 838136 main.go:141] libmachine: Making call to close driver server
I0316 18:11:38.290594 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
I0316 18:11:38.290992 838136 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:11:38.291014 838136 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:11:38.291021 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
I0316 18:11:38.291029 838136 main.go:141] libmachine: Making call to close driver server
I0316 18:11:38.291042 838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
I0316 18:11:38.291316 838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
I0316 18:11:38.291358 838136 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:11:38.291366 838136 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:11:38.293371 838136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-985498 addons enable metrics-server
I0316 18:11:38.295184 838136 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
I0316 18:11:38.296764 838136 addons.go:505] duration metric: took 2.117899672s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
I0316 18:11:39.457709 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:11:41.458339 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:11:43.957814 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:11:46.457217 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:11:48.958059 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:11:51.458586 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:11:53.460935 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:11:55.957540 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:11:57.958656 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:00.457479 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:02.458161 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:04.458359 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:06.458933 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:08.958770 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:11.457997 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:13.460097 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:15.959052 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:18.456224 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:20.458246 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:22.959403 838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:25.457847 838136 pod_ready.go:92] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"True"
I0316 18:12:25.457875 838136 pod_ready.go:81] duration metric: took 48.008087164s for pod "coredns-74ff55c5b-p8874" in "kube-system" namespace to be "Ready" ...
I0316 18:12:25.457890 838136 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:12:27.466917 838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:29.467072 838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:31.968920 838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:34.466799 838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:36.969785 838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:39.467148 838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:41.747146 838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:42.473125 838136 pod_ready.go:92] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"True"
I0316 18:12:42.473171 838136 pod_ready.go:81] duration metric: took 17.015273448s for pod "etcd-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:12:42.473192 838136 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:12:42.487303 838136 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"True"
I0316 18:12:42.487332 838136 pod_ready.go:81] duration metric: took 14.130108ms for pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:12:42.487343 838136 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:12:44.495468 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:46.496365 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:48.996268 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:50.996939 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:53.495283 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:55.498249 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:57.498739 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:12:59.997059 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:02.497715 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:04.995808 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:06.997556 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:09.502223 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:11.995230 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:13.996772 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:16.495974 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:18.497399 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:20.998583 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:23.495088 838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:25.495023 838136 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"True"
I0316 18:13:25.495095 838136 pod_ready.go:81] duration metric: took 43.007714174s for pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:13:25.495119 838136 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nvd4k" in "kube-system" namespace to be "Ready" ...
I0316 18:13:25.503545 838136 pod_ready.go:92] pod "kube-proxy-nvd4k" in "kube-system" namespace has status "Ready":"True"
I0316 18:13:25.503575 838136 pod_ready.go:81] duration metric: took 8.446901ms for pod "kube-proxy-nvd4k" in "kube-system" namespace to be "Ready" ...
I0316 18:13:25.503590 838136 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:13:25.511577 838136 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"True"
I0316 18:13:25.511608 838136 pod_ready.go:81] duration metric: took 8.009557ms for pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
I0316 18:13:25.511620 838136 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace to be "Ready" ...
I0316 18:13:27.520914 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:30.020574 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:32.520269 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:35.019671 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:37.019971 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:39.520618 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:42.019996 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:44.020764 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:46.519790 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:49.019724 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:51.020495 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:53.521024 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:56.019898 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:13:58.522343 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:01.018812 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:03.025763 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:05.519405 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:08.020013 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:10.519614 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:13.018496 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:15.021385 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:17.520865 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:20.023696 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:22.518491 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:24.518823 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:26.523460 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:28.527078 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:31.031993 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:33.522275 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:36.022529 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:38.521717 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:41.023808 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:43.520066 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:45.520182 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:47.521846 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:50.020453 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:52.021556 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:54.519667 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:56.520884 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:14:58.522239 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:01.020266 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:03.022120 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:05.520447 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:08.020488 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:10.518545 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:12.521483 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:15.019988 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:17.022626 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:19.522676 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:22.021070 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:24.021554 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:26.520510 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:29.020572 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:31.526496 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:34.022016 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:36.519921 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:38.520831 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:40.521307 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:43.019174 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:45.021664 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:47.519600 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:49.520987 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:51.522060 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:54.020471 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:56.020790 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:15:58.021958 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:00.023149 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:02.523023 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:05.021660 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:07.519158 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:09.520044 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:12.020492 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:14.521457 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:17.022695 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:19.621306 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:22.023069 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:24.519709 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:26.520133 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:28.521538 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:31.020524 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:33.520308 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:36.022479 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:38.521701 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:40.523678 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:43.022492 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:45.523895 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:47.524586 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:50.020159 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:52.518683 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:54.520757 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:56.521392 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:58.521540 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:01.019106 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:03.020683 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:05.520962 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:07.521498 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:10.019748 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:12.020707 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:14.519518 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:16.519651 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:19.019366 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:21.019491 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:23.021112 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:25.519500 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:25.519540 838136 pod_ready.go:81] duration metric: took 4m0.007912771s for pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace to be "Ready" ...
E0316 18:17:25.519551 838136 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0316 18:17:25.519559 838136 pod_ready.go:38] duration metric: took 5m48.09067273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0316 18:17:25.519577 838136 api_server.go:52] waiting for apiserver process to appear ...
I0316 18:17:25.519614 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0316 18:17:25.519725 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0316 18:17:25.587023 838136 cri.go:89] found id: "84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438"
I0316 18:17:25.587057 838136 cri.go:89] found id: ""
I0316 18:17:25.587068 838136 logs.go:276] 1 containers: [84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438]
I0316 18:17:25.587136 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.593870 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0316 18:17:25.593959 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0316 18:17:25.644646 838136 cri.go:89] found id: "2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9"
I0316 18:17:25.644677 838136 cri.go:89] found id: ""
I0316 18:17:25.644687 838136 logs.go:276] 1 containers: [2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9]
I0316 18:17:25.644751 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.652161 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0316 18:17:25.652231 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0316 18:17:25.712920 838136 cri.go:89] found id: "61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77"
I0316 18:17:25.712955 838136 cri.go:89] found id: ""
I0316 18:17:25.712967 838136 logs.go:276] 1 containers: [61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77]
I0316 18:17:25.713041 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.719028 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0316 18:17:25.719136 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0316 18:17:25.773897 838136 cri.go:89] found id: "34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c"
I0316 18:17:25.773927 838136 cri.go:89] found id: ""
I0316 18:17:25.773937 838136 logs.go:276] 1 containers: [34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c]
I0316 18:17:25.774002 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.780138 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0316 18:17:25.780246 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0316 18:17:25.843279 838136 cri.go:89] found id: "d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd"
I0316 18:17:25.843309 838136 cri.go:89] found id: ""
I0316 18:17:25.843317 838136 logs.go:276] 1 containers: [d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd]
I0316 18:17:25.843375 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.848956 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0316 18:17:25.849060 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0316 18:17:25.899592 838136 cri.go:89] found id: "05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554"
I0316 18:17:25.899624 838136 cri.go:89] found id: "162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72"
I0316 18:17:25.899630 838136 cri.go:89] found id: ""
I0316 18:17:25.899641 838136 logs.go:276] 2 containers: [05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554 162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72]
I0316 18:17:25.899710 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.907916 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.918955 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0316 18:17:25.919046 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0316 18:17:25.971433 838136 cri.go:89] found id: ""
I0316 18:17:25.971478 838136 logs.go:276] 0 containers: []
W0316 18:17:25.971490 838136 logs.go:278] No container was found matching "kindnet"
I0316 18:17:25.971498 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0316 18:17:25.971572 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0316 18:17:26.021187 838136 cri.go:89] found id: "aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3"
I0316 18:17:26.021220 838136 cri.go:89] found id: ""
I0316 18:17:26.021229 838136 logs.go:276] 1 containers: [aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3]
I0316 18:17:26.021296 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:26.028046 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0316 18:17:26.028122 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0316 18:17:26.086850 838136 cri.go:89] found id: "aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8"
I0316 18:17:26.086875 838136 cri.go:89] found id: "7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6"
I0316 18:17:26.086879 838136 cri.go:89] found id: ""
I0316 18:17:26.086887 838136 logs.go:276] 2 containers: [aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8 7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6]
I0316 18:17:26.086940 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:26.093302 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:26.101414 838136 logs.go:123] Gathering logs for etcd [2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9] ...
I0316 18:17:26.101443 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9"
I0316 18:17:26.171632 838136 logs.go:123] Gathering logs for coredns [61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77] ...
I0316 18:17:26.171697 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77"
I0316 18:17:26.219764 838136 logs.go:123] Gathering logs for storage-provisioner [7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6] ...
I0316 18:17:26.219813 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6"
I0316 18:17:26.281101 838136 logs.go:123] Gathering logs for describe nodes ...
I0316 18:17:26.281153 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0316 18:17:26.484976 838136 logs.go:123] Gathering logs for kube-controller-manager [162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72] ...
I0316 18:17:26.485019 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72"
I0316 18:17:26.556929 838136 logs.go:123] Gathering logs for container status ...
I0316 18:17:26.556977 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0316 18:17:26.609552 838136 logs.go:123] Gathering logs for storage-provisioner [aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8] ...
I0316 18:17:26.609594 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8"
I0316 18:17:26.656257 838136 logs.go:123] Gathering logs for kubelet ...
I0316 18:17:26.656294 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0316 18:17:26.698787 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:24 old-k8s-version-985498 kubelet[888]: E0316 18:11:24.452217 888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-210505493 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/35: file exists"
W0316 18:17:26.703383 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:27 old-k8s-version-985498 kubelet[888]: E0316 18:11:27.530957 888 pod_workers.go:191] Error syncing pod 31a485c797dc9b239357ad3b694dc86e ("kube-apiserver-old-k8s-version-985498_kube-system(31a485c797dc9b239357ad3b694dc86e)"), skipping: failed to "StartContainer" for "kube-apiserver" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3710715184 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/36: file exists"
W0316 18:17:26.705326 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:29 old-k8s-version-985498 kubelet[888]: E0316 18:11:29.589592 888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"
W0316 18:17:26.708845 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:33 old-k8s-version-985498 kubelet[888]: E0316 18:11:33.774758 888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"
W0316 18:17:26.713784 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:34 old-k8s-version-985498 kubelet[888]: E0316 18:11:34.296039 888 pod_workers.go:191] Error syncing pod d89b271f-838a-4592-b128-fcb2a06fc5e9 ("storage-provisioner_kube-system(d89b271f-838a-4592-b128-fcb2a06fc5e9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1431217611 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/38: file exists"
W0316 18:17:26.719803 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:37 old-k8s-version-985498 kubelet[888]: E0316 18:11:37.840851 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.719947 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:38 old-k8s-version-985498 kubelet[888]: E0316 18:11:38.487672 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.721883 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:48 old-k8s-version-985498 kubelet[888]: E0316 18:11:48.375825 888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1993581407 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/41: file exists"
W0316 18:17:26.723186 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:48 old-k8s-version-985498 kubelet[888]: E0316 18:11:48.539670 888 pod_workers.go:191] Error syncing pod daf8607f-2ff3-4d80-b1ed-ca2d24cb6b36 ("kube-proxy-nvd4k_kube-system(daf8607f-2ff3-4d80-b1ed-ca2d24cb6b36)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2911645386 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/42: file exists"
W0316 18:17:26.725902 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:50 old-k8s-version-985498 kubelet[888]: E0316 18:11:50.493127 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.727816 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:01 old-k8s-version-985498 kubelet[888]: E0316 18:12:01.388860 888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2375308116 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/44: file exists"
W0316 18:17:26.727957 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:02 old-k8s-version-985498 kubelet[888]: E0316 18:12:02.347425 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.729296 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:04 old-k8s-version-985498 kubelet[888]: E0316 18:12:04.759315 888 pod_workers.go:191] Error syncing pod 9d1a1153-d964-4893-aae0-6b926755edf4 ("busybox_default(9d1a1153-d964-4893-aae0-6b926755edf4)"), skipping: failed to "StartContainer" for "busybox" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\": failed to prepare extraction snapshot \"extract-753167480-EI9m sha256:e49dd1e534d9df22f1c5041581eaeb3f23fc6ef51ac5a4963ab35adc8f056f65\": failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2174206111 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45: file exists"
W0316 18:17:26.729513 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:05 old-k8s-version-985498 kubelet[888]: E0316 18:12:05.583630 888 pod_workers.go:191] Error syncing pod 9d1a1153-d964-4893-aae0-6b926755edf4 ("busybox_default(9d1a1153-d964-4893-aae0-6b926755edf4)"), skipping: failed to "StartContainer" for "busybox" with ImagePullBackOff: "Back-off pulling image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
W0316 18:17:26.731335 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:17 old-k8s-version-985498 kubelet[888]: E0316 18:12:17.365731 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.732305 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:31 old-k8s-version-985498 kubelet[888]: E0316 18:12:31.362316 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.732729 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:38 old-k8s-version-985498 kubelet[888]: E0316 18:12:38.782628 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.732969 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:39 old-k8s-version-985498 kubelet[888]: E0316 18:12:39.791862 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.733111 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:43 old-k8s-version-985498 kubelet[888]: E0316 18:12:43.348091 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.733346 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:46 old-k8s-version-985498 kubelet[888]: E0316 18:12:46.689033 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.735058 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:58 old-k8s-version-985498 kubelet[888]: E0316 18:12:58.404260 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.735490 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:02 old-k8s-version-985498 kubelet[888]: E0316 18:13:02.883259 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.735729 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:06 old-k8s-version-985498 kubelet[888]: E0316 18:13:06.689066 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.735866 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:11 old-k8s-version-985498 kubelet[888]: E0316 18:13:11.347423 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.736102 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:20 old-k8s-version-985498 kubelet[888]: E0316 18:13:20.346818 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.736237 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:22 old-k8s-version-985498 kubelet[888]: E0316 18:13:22.349160 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.736374 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:34 old-k8s-version-985498 kubelet[888]: E0316 18:13:34.347075 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.736801 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:36 old-k8s-version-985498 kubelet[888]: E0316 18:13:36.006325 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.737037 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:37 old-k8s-version-985498 kubelet[888]: E0316 18:13:37.013902 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.737173 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:46 old-k8s-version-985498 kubelet[888]: E0316 18:13:46.347475 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.737421 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:51 old-k8s-version-985498 kubelet[888]: E0316 18:13:51.347194 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.737556 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:58 old-k8s-version-985498 kubelet[888]: E0316 18:13:58.348592 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.737794 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:03 old-k8s-version-985498 kubelet[888]: E0316 18:14:03.346460 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.737933 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:09 old-k8s-version-985498 kubelet[888]: E0316 18:14:09.347794 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.738169 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:15 old-k8s-version-985498 kubelet[888]: E0316 18:14:15.348212 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.739915 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:21 old-k8s-version-985498 kubelet[888]: E0316 18:14:21.360852 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.740357 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:29 old-k8s-version-985498 kubelet[888]: E0316 18:14:29.175538 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.740493 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:32 old-k8s-version-985498 kubelet[888]: E0316 18:14:32.348500 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.740728 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:36 old-k8s-version-985498 kubelet[888]: E0316 18:14:36.689558 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.740867 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:46 old-k8s-version-985498 kubelet[888]: E0316 18:14:46.348058 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.741102 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:49 old-k8s-version-985498 kubelet[888]: E0316 18:14:49.347315 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.741235 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:57 old-k8s-version-985498 kubelet[888]: E0316 18:14:57.349480 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.741471 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:03 old-k8s-version-985498 kubelet[888]: E0316 18:15:03.346815 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.741606 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:10 old-k8s-version-985498 kubelet[888]: E0316 18:15:10.347187 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.741845 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:18 old-k8s-version-985498 kubelet[888]: E0316 18:15:18.346934 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.741980 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:25 old-k8s-version-985498 kubelet[888]: E0316 18:15:25.347491 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.742249 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:29 old-k8s-version-985498 kubelet[888]: E0316 18:15:29.347101 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.742385 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:39 old-k8s-version-985498 kubelet[888]: E0316 18:15:39.347176 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.742620 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:42 old-k8s-version-985498 kubelet[888]: E0316 18:15:42.347133 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.742754 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:50 old-k8s-version-985498 kubelet[888]: E0316 18:15:50.348255 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.743180 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:58 old-k8s-version-985498 kubelet[888]: E0316 18:15:58.519929 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.743316 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:03 old-k8s-version-985498 kubelet[888]: E0316 18:16:03.347044 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.743562 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:06 old-k8s-version-985498 kubelet[888]: E0316 18:16:06.689281 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.743697 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:17 old-k8s-version-985498 kubelet[888]: E0316 18:16:17.347194 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.743937 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:19 old-k8s-version-985498 kubelet[888]: E0316 18:16:19.346699 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.744072 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:30 old-k8s-version-985498 kubelet[888]: E0316 18:16:30.348163 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.744308 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:34 old-k8s-version-985498 kubelet[888]: E0316 18:16:34.346242 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.744441 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:41 old-k8s-version-985498 kubelet[888]: E0316 18:16:41.347306 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.744677 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:49 old-k8s-version-985498 kubelet[888]: E0316 18:16:49.347088 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.744816 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.745050 838136 logs.go:138] Found kubelet problem: Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.746768 838136 logs.go:138] Found kubelet problem: Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362954 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.747010 838136 logs.go:138] Found kubelet problem: Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: E0316 18:17:16.346879 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.747145 838136 logs.go:138] Found kubelet problem: Mar 16 18:17:23 old-k8s-version-985498 kubelet[888]: E0316 18:17:23.347609 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0316 18:17:26.747156 838136 logs.go:123] Gathering logs for dmesg ...
I0316 18:17:26.747172 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0316 18:17:26.766207 838136 logs.go:123] Gathering logs for kube-scheduler [34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c] ...
I0316 18:17:26.766251 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c"
I0316 18:17:26.823871 838136 logs.go:123] Gathering logs for kube-proxy [d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd] ...
I0316 18:17:26.823920 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd"
I0316 18:17:26.870843 838136 logs.go:123] Gathering logs for kube-controller-manager [05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554] ...
I0316 18:17:26.870883 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554"
I0316 18:17:26.940409 838136 logs.go:123] Gathering logs for kubernetes-dashboard [aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3] ...
I0316 18:17:26.940460 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3"
I0316 18:17:26.987147 838136 logs.go:123] Gathering logs for kube-apiserver [84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438] ...
I0316 18:17:26.987189 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438"
I0316 18:17:27.062021 838136 logs.go:123] Gathering logs for containerd ...
I0316 18:17:27.062071 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0316 18:17:27.136063 838136 out.go:304] Setting ErrFile to fd 2...
I0316 18:17:27.136101 838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0316 18:17:27.136179 838136 out.go:239] X Problems detected in kubelet:
W0316 18:17:27.136198 838136 out.go:239] Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:27.136211 838136 out.go:239] Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:27.136229 838136 out.go:239] Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362954 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:27.136246 838136 out.go:239] Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: E0316 18:17:16.346879 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:27.136263 838136 out.go:239] Mar 16 18:17:23 old-k8s-version-985498 kubelet[888]: E0316 18:17:23.347609 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0316 18:17:27.136276 838136 out.go:304] Setting ErrFile to fd 2...
I0316 18:17:27.136283 838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 18:17:37.137763 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:17:37.159011 838136 api_server.go:72] duration metric: took 6m0.980190849s to wait for apiserver process to appear ...
I0316 18:17:37.159048 838136 api_server.go:88] waiting for apiserver healthz status ...
I0316 18:17:37.161262 838136 out.go:177]
W0316 18:17:37.162843 838136 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
W0316 18:17:37.162874 838136 out.go:239] *
W0316 18:17:37.163764 838136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0316 18:17:37.165696 838136 out.go:177]
** /stderr **
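The failure above comes down to the healthz wait in api_server.go: after restarting the VM, minikube polls the apiserver's /healthz endpoint until it answers or the 6m0s budget runs out. As a rough illustration of that pattern (not minikube's actual code; the URL, probe interval, and TLS handling below are assumptions), a minimal Go sketch:

// healthz_wait.go: poll an apiserver /healthz endpoint until it reports "ok"
// or the deadline passes. Illustrative sketch only; minikube's real wait
// lives in api_server.go and differs in detail.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a cluster-local CA, so the probe (and only the
	// probe) skips certificate verification here.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // healthy
			}
		}
		time.Sleep(2 * time.Second) // fixed probe interval
	}
	return fmt.Errorf("apiserver healthz never reported healthy within %s", timeout)
}

func main() {
	// Hypothetical endpoint for illustration; a real run would use the VM IP.
	if err := waitForHealthz("https://192.168.50.2:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

A fixed 2s interval is enough for a sketch; the key property is the hard deadline, which is exactly what produced the GUEST_START timeout reported above.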
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-985498 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-985498 -n old-k8s-version-985498
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p old-k8s-version-985498 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-985498 logs -n 25: (1.563416581s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| image | embed-certs-831781 image list | embed-certs-831781 | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:15 UTC |
| | --format=json | | | | | |
| pause | -p embed-certs-831781 | embed-certs-831781 | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:15 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p embed-certs-831781 | embed-certs-831781 | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:15 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p embed-certs-831781 | embed-certs-831781 | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:15 UTC |
| delete | -p embed-certs-831781 | embed-certs-831781 | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:15 UTC |
| start | -p newest-cni-993416 --memory=2200 --alsologtostderr | newest-cni-993416 | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:16 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.29.0-rc.2 | | | | | |
| image | no-preload-738074 image list | no-preload-738074 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-738074 | no-preload-738074 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-738074 | no-preload-738074 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-738074 | no-preload-738074 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| delete | -p no-preload-738074 | no-preload-738074 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| image | default-k8s-diff-port-683490 | default-k8s-diff-port-683490 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | image list --format=json | | | | | |
| pause | -p | default-k8s-diff-port-683490 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | default-k8s-diff-port-683490 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-diff-port-683490 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | default-k8s-diff-port-683490 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-diff-port-683490 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | default-k8s-diff-port-683490 | | | | | |
| delete | -p | default-k8s-diff-port-683490 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | default-k8s-diff-port-683490 | | | | | |
| addons | enable metrics-server -p newest-cni-993416 | newest-cni-993416 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p newest-cni-993416 | newest-cni-993416 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p newest-cni-993416 | newest-cni-993416 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-993416 --memory=2200 --alsologtostderr | newest-cni-993416 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:17 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.29.0-rc.2 | | | | | |
| image | newest-cni-993416 image list | newest-cni-993416 | jenkins | v1.32.0 | 16 Mar 24 18:17 UTC | 16 Mar 24 18:17 UTC |
| | --format=json | | | | | |
| pause | -p newest-cni-993416 | newest-cni-993416 | jenkins | v1.32.0 | 16 Mar 24 18:17 UTC | 16 Mar 24 18:17 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-993416 | newest-cni-993416 | jenkins | v1.32.0 | 16 Mar 24 18:17 UTC | 16 Mar 24 18:17 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-993416 | newest-cni-993416 | jenkins | v1.32.0 | 16 Mar 24 18:17 UTC | 16 Mar 24 18:17 UTC |
| delete | -p newest-cni-993416 | newest-cni-993416 | jenkins | v1.32.0 | 16 Mar 24 18:17 UTC | 16 Mar 24 18:17 UTC |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/03/16 18:16:53
Running on machine: ubuntu-20-agent-12
Binary: Built with gc go1.22.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0316 18:16:53.227422 841431 out.go:291] Setting OutFile to fd 1 ...
I0316 18:16:53.228035 841431 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 18:16:53.228055 841431 out.go:304] Setting ErrFile to fd 2...
I0316 18:16:53.228062 841431 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 18:16:53.228570 841431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
I0316 18:16:53.229606 841431 out.go:298] Setting JSON to false
I0316 18:16:53.230645 841431 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":86360,"bootTime":1710526653,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0316 18:16:53.230723 841431 start.go:139] virtualization: kvm guest
I0316 18:16:53.233024 841431 out.go:177] * [newest-cni-993416] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0316 18:16:53.234895 841431 out.go:177] - MINIKUBE_LOCATION=18277
I0316 18:16:53.234951 841431 notify.go:220] Checking for updates...
I0316 18:16:53.236410 841431 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0316 18:16:53.237994 841431 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
I0316 18:16:53.239420 841431 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
I0316 18:16:53.240653 841431 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0316 18:16:53.241899 841431 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0316 18:16:53.243743 841431 config.go:182] Loaded profile config "newest-cni-993416": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
I0316 18:16:53.244162 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:16:53.244226 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:16:53.260630 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43931
I0316 18:16:53.261234 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:16:53.261919 841431 main.go:141] libmachine: Using API Version 1
I0316 18:16:53.261944 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:16:53.262404 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:16:53.262690 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:16:53.263030 841431 driver.go:392] Setting default libvirt URI to qemu:///system
I0316 18:16:53.263339 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:16:53.263378 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:16:53.279157 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
I0316 18:16:53.279747 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:16:53.280271 841431 main.go:141] libmachine: Using API Version 1
I0316 18:16:53.280294 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:16:53.280635 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:16:53.280850 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:16:53.320020 841431 out.go:177] * Using the kvm2 driver based on existing profile
I0316 18:16:53.321474 841431 start.go:297] selected driver: kvm2
I0316 18:16:53.321503 841431 start.go:901] validating driver "kvm2" against &{Name:newest-cni-993416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-993416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0316 18:16:53.321648 841431 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0316 18:16:53.322409 841431 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0316 18:16:53.322488 841431 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18277-781196/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0316 18:16:53.339422 841431 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
I0316 18:16:53.339952 841431 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0316 18:16:53.340030 841431 cni.go:84] Creating CNI manager for ""
I0316 18:16:53.340045 841431 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0316 18:16:53.340083 841431 start.go:340] cluster config:
{Name:newest-cni-993416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-993416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0316 18:16:53.340193 841431 iso.go:125] acquiring lock: {Name:mk48d016d8d435147389d59734ec7ed09e828db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0316 18:16:53.342110 841431 out.go:177] * Starting "newest-cni-993416" primary control-plane node in "newest-cni-993416" cluster
I0316 18:16:53.343482 841431 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
I0316 18:16:53.343551 841431 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
I0316 18:16:53.343565 841431 cache.go:56] Caching tarball of preloaded images
I0316 18:16:53.343690 841431 preload.go:173] Found /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0316 18:16:53.343716 841431 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on containerd
I0316 18:16:53.343850 841431 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/config.json ...
I0316 18:16:53.344068 841431 start.go:360] acquireMachinesLock for newest-cni-993416: {Name:mkf97f06937f9fa972ee38e81e5f88859912f65f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0316 18:16:53.344163 841431 start.go:364] duration metric: took 72.742µs to acquireMachinesLock for "newest-cni-993416"
I0316 18:16:53.344180 841431 start.go:96] Skipping create...Using existing machine configuration
I0316 18:16:53.344186 841431 fix.go:54] fixHost starting:
I0316 18:16:53.344487 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:16:53.344525 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:16:53.360544 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37539
I0316 18:16:53.361046 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:16:53.361568 841431 main.go:141] libmachine: Using API Version 1
I0316 18:16:53.361590 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:16:53.361978 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:16:53.362212 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:16:53.362394 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
I0316 18:16:53.364378 841431 fix.go:112] recreateIfNeeded on newest-cni-993416: state=Stopped err=<nil>
I0316 18:16:53.364411 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
W0316 18:16:53.364597 841431 fix.go:138] unexpected machine state, will restart: <nil>
I0316 18:16:53.367250 841431 out.go:177] * Restarting existing kvm2 VM for "newest-cni-993416" ...
I0316 18:16:50.020159 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:52.518683 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:53.368632 841431 main.go:141] libmachine: (newest-cni-993416) Calling .Start
I0316 18:16:53.368897 841431 main.go:141] libmachine: (newest-cni-993416) Ensuring networks are active...
I0316 18:16:53.369842 841431 main.go:141] libmachine: (newest-cni-993416) Ensuring network default is active
I0316 18:16:53.370156 841431 main.go:141] libmachine: (newest-cni-993416) Ensuring network mk-newest-cni-993416 is active
I0316 18:16:53.370552 841431 main.go:141] libmachine: (newest-cni-993416) Getting domain xml...
I0316 18:16:53.371486 841431 main.go:141] libmachine: (newest-cni-993416) Creating domain...
I0316 18:16:54.638792 841431 main.go:141] libmachine: (newest-cni-993416) Waiting to get IP...
I0316 18:16:54.639743 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:16:54.640202 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:16:54.640246 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:54.640159 841466 retry.go:31] will retry after 208.50444ms: waiting for machine to come up
I0316 18:16:54.850948 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:16:54.851402 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:16:54.851470 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:54.851350 841466 retry.go:31] will retry after 359.013848ms: waiting for machine to come up
I0316 18:16:55.212276 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:16:55.212780 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:16:55.212816 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:55.212697 841466 retry.go:31] will retry after 307.020465ms: waiting for machine to come up
I0316 18:16:55.521507 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:16:55.522128 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:16:55.522160 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:55.522086 841466 retry.go:31] will retry after 542.340519ms: waiting for machine to come up
I0316 18:16:56.065858 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:16:56.066417 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:16:56.066443 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:56.066360 841466 retry.go:31] will retry after 542.386197ms: waiting for machine to come up
I0316 18:16:56.610202 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:16:56.610597 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:16:56.610633 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:56.610569 841466 retry.go:31] will retry after 665.676296ms: waiting for machine to come up
I0316 18:16:57.278214 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:16:57.278730 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:16:57.278759 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:57.278684 841466 retry.go:31] will retry after 913.154561ms: waiting for machine to come up
I0316 18:16:58.193848 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:16:58.194327 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:16:58.194347 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:58.194264 841466 retry.go:31] will retry after 918.549294ms: waiting for machine to come up
I0316 18:16:54.520757 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:56.521392 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:58.521540 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:16:59.114563 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:16:59.115081 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:16:59.115110 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:59.115060 841466 retry.go:31] will retry after 1.640225957s: waiting for machine to come up
I0316 18:17:00.756565 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:00.757032 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:17:00.757064 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:17:00.756967 841466 retry.go:31] will retry after 1.524971609s: waiting for machine to come up
I0316 18:17:02.283964 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:02.284601 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:17:02.284637 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:17:02.284534 841466 retry.go:31] will retry after 2.005667021s: waiting for machine to come up
I0316 18:17:01.019106 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:03.020683 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:04.291575 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:04.292157 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:17:04.292184 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:17:04.292082 841466 retry.go:31] will retry after 2.262780898s: waiting for machine to come up
I0316 18:17:06.557963 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:06.558485 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:17:06.558531 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:17:06.558429 841466 retry.go:31] will retry after 3.717938959s: waiting for machine to come up
I0316 18:17:05.520962 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:07.521498 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:10.279363 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:10.279979 841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
I0316 18:17:10.280013 841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:17:10.279896 841466 retry.go:31] will retry after 4.612576288s: waiting for machine to come up
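The repeated "will retry after ..." lines from retry.go:31 show the familiar backoff pattern: each failed probe for the VM's DHCP lease schedules the next attempt after a longer, jittered delay. A self-contained Go sketch of that pattern (delays, jitter, and attempt count here are illustrative, not minikube's actual policy):

// retry_backoff.go: retry an operation with growing, jittered delays,
// mirroring the "will retry after ..." sequence above. Sketch only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Grow the delay each attempt and add jitter so concurrent
		// waiters don't probe in lockstep.
		delay := base*time.Duration(1<<uint(i)) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("retry %d: will retry after %s: %v\n", i+1, delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	n := 0
	err := retryWithBackoff(func() error {
		n++
		if n < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 10, 200*time.Millisecond)
	fmt.Println("done:", err)
}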
I0316 18:17:10.019748 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:12.020707 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:14.894517 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:14.895091 841431 main.go:141] libmachine: (newest-cni-993416) Found IP for machine: 192.168.72.228
I0316 18:17:14.895117 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has current primary IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:14.895123 841431 main.go:141] libmachine: (newest-cni-993416) Reserving static IP address...
I0316 18:17:14.895619 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "newest-cni-993416", mac: "52:54:00:73:0d:0a", ip: "192.168.72.228"} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:14.895668 841431 main.go:141] libmachine: (newest-cni-993416) DBG | skip adding static IP to network mk-newest-cni-993416 - found existing host DHCP lease matching {name: "newest-cni-993416", mac: "52:54:00:73:0d:0a", ip: "192.168.72.228"}
I0316 18:17:14.895682 841431 main.go:141] libmachine: (newest-cni-993416) Reserved static IP address: 192.168.72.228
I0316 18:17:14.895695 841431 main.go:141] libmachine: (newest-cni-993416) Waiting for SSH to be available...
I0316 18:17:14.895711 841431 main.go:141] libmachine: (newest-cni-993416) DBG | Getting to WaitForSSH function...
I0316 18:17:14.898142 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:14.898527 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:14.898562 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:14.898672 841431 main.go:141] libmachine: (newest-cni-993416) DBG | Using SSH client type: external
I0316 18:17:14.898706 841431 main.go:141] libmachine: (newest-cni-993416) DBG | Using SSH private key: /home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa (-rw-------)
I0316 18:17:14.898730 841431 main.go:141] libmachine: (newest-cni-993416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa -p 22] /usr/bin/ssh <nil>}
I0316 18:17:14.898741 841431 main.go:141] libmachine: (newest-cni-993416) DBG | About to run SSH command:
I0316 18:17:14.898758 841431 main.go:141] libmachine: (newest-cni-993416) DBG | exit 0
I0316 18:17:15.036536 841431 main.go:141] libmachine: (newest-cni-993416) DBG | SSH cmd err, output: <nil>:
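The WaitForSSH step above establishes readiness by running the trivial command "exit 0" over SSH until it succeeds. A minimal Go version of the same probe (user, host, key path, and option set are placeholders drawn from the log, not a fixed API):

// ssh_wait.go: probe a VM for SSH readiness by running "exit 0" through
// the system ssh client, as the WaitForSSH step above does. Sketch only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(user, host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd answered and ran the command
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %s", host, timeout)
}

func main() {
	fmt.Println(waitForSSH("docker", "192.168.72.228", "/path/to/id_rsa", 2*time.Minute))
}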
I0316 18:17:15.036959 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetConfigRaw
I0316 18:17:15.037625 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetIP
I0316 18:17:15.040416 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.040862 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:15.040901 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.041163 841431 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/config.json ...
I0316 18:17:15.041566 841431 machine.go:94] provisionDockerMachine start ...
I0316 18:17:15.041598 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:17:15.041905 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:15.044592 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.044969 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:15.045012 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.045186 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:15.045443 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:15.045620 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:15.045755 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:15.045935 841431 main.go:141] libmachine: Using SSH client type: native
I0316 18:17:15.046253 841431 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil> [] 0s} 192.168.72.228 22 <nil> <nil>}
I0316 18:17:15.046270 841431 main.go:141] libmachine: About to run SSH command:
hostname
I0316 18:17:15.165086 841431 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0316 18:17:15.165121 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetMachineName
I0316 18:17:15.165450 841431 buildroot.go:166] provisioning hostname "newest-cni-993416"
I0316 18:17:15.165479 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetMachineName
I0316 18:17:15.165697 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:15.168728 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.169061 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:15.169102 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.169253 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:15.169477 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:15.169664 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:15.169813 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:15.169947 841431 main.go:141] libmachine: Using SSH client type: native
I0316 18:17:15.170167 841431 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil> [] 0s} 192.168.72.228 22 <nil> <nil>}
I0316 18:17:15.170187 841431 main.go:141] libmachine: About to run SSH command:
sudo hostname newest-cni-993416 && echo "newest-cni-993416" | sudo tee /etc/hostname
I0316 18:17:15.308584 841431 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-993416
I0316 18:17:15.308618 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:15.311584 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.311985 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:15.312017 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.312250 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:15.312508 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:15.312667 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:15.312780 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:15.312985 841431 main.go:141] libmachine: Using SSH client type: native
I0316 18:17:15.313177 841431 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil> [] 0s} 192.168.72.228 22 <nil> <nil>}
I0316 18:17:15.313203 841431 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\snewest-cni-993416' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-993416/g' /etc/hosts;
else
echo '127.0.1.1 newest-cni-993416' | sudo tee -a /etc/hosts;
fi
fi
I0316 18:17:15.445260 841431 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0316 18:17:15.445295 841431 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18277-781196/.minikube CaCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18277-781196/.minikube}
I0316 18:17:15.445351 841431 buildroot.go:174] setting up certificates
I0316 18:17:15.445362 841431 provision.go:84] configureAuth start
I0316 18:17:15.445376 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetMachineName
I0316 18:17:15.445750 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetIP
I0316 18:17:15.448920 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.449246 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:15.449275 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.449422 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:15.451623 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.452046 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:15.452096 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.452243 841431 provision.go:143] copyHostCerts
I0316 18:17:15.452326 841431 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem, removing ...
I0316 18:17:15.452338 841431 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem
I0316 18:17:15.452405 841431 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem (1675 bytes)
I0316 18:17:15.452522 841431 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem, removing ...
I0316 18:17:15.452532 841431 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem
I0316 18:17:15.452563 841431 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem (1082 bytes)
I0316 18:17:15.452660 841431 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem, removing ...
I0316 18:17:15.452676 841431 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem
I0316 18:17:15.452719 841431 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem (1123 bytes)
I0316 18:17:15.452818 841431 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem org=jenkins.newest-cni-993416 san=[127.0.0.1 192.168.72.228 localhost minikube newest-cni-993416]
I0316 18:17:15.565115 841431 provision.go:177] copyRemoteCerts
I0316 18:17:15.565188 841431 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0316 18:17:15.565228 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:15.568227 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.568683 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:15.568713 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.569003 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:15.569248 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:15.569484 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:15.569685 841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
I0316 18:17:15.660879 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0316 18:17:15.691404 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0316 18:17:15.725806 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0316 18:17:15.755915 841431 provision.go:87] duration metric: took 310.536281ms to configureAuth
I0316 18:17:15.755947 841431 buildroot.go:189] setting minikube options for container-runtime
I0316 18:17:15.756143 841431 config.go:182] Loaded profile config "newest-cni-993416": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
I0316 18:17:15.756154 841431 machine.go:97] duration metric: took 714.570228ms to provisionDockerMachine
I0316 18:17:15.756163 841431 start.go:293] postStartSetup for "newest-cni-993416" (driver="kvm2")
I0316 18:17:15.756177 841431 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0316 18:17:15.756212 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:17:15.756603 841431 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0316 18:17:15.756655 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:15.759498 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.759902 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:15.759931 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.760147 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:15.760360 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:15.760511 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:15.760640 841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
I0316 18:17:15.853438 841431 ssh_runner.go:195] Run: cat /etc/os-release
I0316 18:17:15.858894 841431 info.go:137] Remote host: Buildroot 2023.02.9
I0316 18:17:15.858927 841431 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-781196/.minikube/addons for local assets ...
I0316 18:17:15.858987 841431 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-781196/.minikube/files for local assets ...
I0316 18:17:15.859061 841431 filesync.go:149] local asset: /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem -> 7884422.pem in /etc/ssl/certs
I0316 18:17:15.859151 841431 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0316 18:17:15.872026 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem --> /etc/ssl/certs/7884422.pem (1708 bytes)
I0316 18:17:15.901994 841431 start.go:296] duration metric: took 145.809588ms for postStartSetup
I0316 18:17:15.902056 841431 fix.go:56] duration metric: took 22.557868796s for fixHost
I0316 18:17:15.902086 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:15.905039 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.905391 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:15.905422 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:15.905734 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:15.905939 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:15.906099 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:15.906230 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:15.906386 841431 main.go:141] libmachine: Using SSH client type: native
I0316 18:17:15.906652 841431 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil> [] 0s} 192.168.72.228 22 <nil> <nil>}
I0316 18:17:15.906668 841431 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0316 18:17:16.025541 841431 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710613036.006722537
I0316 18:17:16.025567 841431 fix.go:216] guest clock: 1710613036.006722537
I0316 18:17:16.025577 841431 fix.go:229] Guest: 2024-03-16 18:17:16.006722537 +0000 UTC Remote: 2024-03-16 18:17:15.902062825 +0000 UTC m=+22.725621869 (delta=104.659712ms)
I0316 18:17:16.025634 841431 fix.go:200] guest clock delta is within tolerance: 104.659712ms
I0316 18:17:16.025641 841431 start.go:83] releasing machines lock for "newest-cni-993416", held for 22.681465652s
I0316 18:17:16.025671 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:17:16.025987 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetIP
I0316 18:17:16.028606 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:16.028956 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:16.028982 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:16.029138 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:17:16.029766 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:17:16.030018 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:17:16.030150 841431 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0316 18:17:16.030235 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:16.030305 841431 ssh_runner.go:195] Run: cat /version.json
I0316 18:17:16.030333 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:16.033028 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:16.033349 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:16.033393 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:16.033416 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:16.033554 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:16.033791 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:16.033902 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:16.033929 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:16.033963 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:16.034038 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:16.034148 841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
I0316 18:17:16.034265 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:16.034456 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:16.034640 841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
I0316 18:17:16.118048 841431 ssh_runner.go:195] Run: systemctl --version
I0316 18:17:16.146259 841431 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0316 18:17:16.154503 841431 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0316 18:17:16.154585 841431 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0316 18:17:16.177501 841431 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
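The find/mv pass above, together with the loopback probe before it, is how minikube quarantines competing CNI configs: anything matching bridge or podman that is not already suffixed .mk_disabled gets renamed so the runtime only loads minikube's own config later. A minimal local sketch of that rename pass in Go, using a scratch directory as a stand-in for /etc/cni/net.d:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI mirrors the remote find/mv: any bridge or podman
// config that is not already disabled gets a .mk_disabled suffix so the
// container runtime stops loading it.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	// /tmp/cni-net.d is a stand-in for /etc/cni/net.d, which needs root.
	disabled, err := disableConflictingCNI("/tmp/cni-net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("disabled:", disabled)
}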
I0316 18:17:16.177539 841431 start.go:494] detecting cgroup driver to use...
I0316 18:17:16.177624 841431 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0316 18:17:16.214268 841431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0316 18:17:16.231541 841431 docker.go:217] disabling cri-docker service (if available) ...
I0316 18:17:16.231611 841431 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0316 18:17:16.249494 841431 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0316 18:17:16.266543 841431 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0316 18:17:16.396368 841431 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0316 18:17:16.568119 841431 docker.go:233] disabling docker service ...
I0316 18:17:16.568275 841431 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0316 18:17:16.587606 841431 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0316 18:17:16.603814 841431 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0316 18:17:16.753806 841431 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0316 18:17:16.907508 841431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
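The block above neutralizes docker in a fixed order: stop the socket first so it cannot re-activate the service, stop the service, disable the socket, mask the service, then probe is-active once more. A sketch of the same systemctl sequence run locally rather than over SSH:

package main

import (
	"fmt"
	"os/exec"
)

// runSystemctl shells out and reports failures without aborting, matching
// the log's tolerance for units that are already stopped or absent.
func runSystemctl(args ...string) {
	out, err := exec.Command("sudo", append([]string{"systemctl"}, args...)...).CombinedOutput()
	if err != nil {
		fmt.Printf("systemctl %v: %v (%s)\n", args, err, out)
	}
}

func main() {
	runSystemctl("stop", "-f", "docker.socket") // socket first, or it re-activates the service
	runSystemctl("stop", "-f", "docker.service")
	runSystemctl("disable", "docker.socket")
	runSystemctl("mask", "docker.service") // mask survives reboots and unit dependencies
	runSystemctl("is-active", "--quiet", "service", "docker")
}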
I0316 18:17:16.925332 841431 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0316 18:17:16.950811 841431 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0316 18:17:16.966511 841431 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0316 18:17:16.981307 841431 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0316 18:17:16.981402 841431 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0316 18:17:16.995896 841431 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0316 18:17:17.010189 841431 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0316 18:17:17.027988 841431 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0316 18:17:17.042158 841431 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0316 18:17:17.056955 841431 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
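The sed runs above rewrite /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.9, force SystemdCgroup = false to match the cgroupfs kubelet driver, map legacy runtime names onto io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A Go sketch of the first two substitutions as regexp rewrites over a trimmed config fragment:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Trimmed config.toml fragment, not the real file.
	config := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	// Same pattern/replacement pairs the sed commands apply, with the
	// leading whitespace captured so indentation is preserved.
	subs := []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	}
	for _, s := range subs {
		config = regexp.MustCompile(s.re).ReplaceAllString(config, s.repl)
	}
	fmt.Print(config)
}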
I0316 18:17:17.071564 841431 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0316 18:17:17.084678 841431 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0316 18:17:17.084760 841431 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0316 18:17:17.102942 841431 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
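The status-255 sysctl above is detection, not failure: /proc/sys/net/bridge/bridge-nf-call-iptables is absent simply because br_netfilter is not loaded yet, so the fallback is modprobe followed by enabling IPv4 forwarding. The same probe-then-load logic sketched in Go (root is needed for the modprobe and the ip_forward write):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the sysctl path is missing, the module is not loaded.
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		fmt.Println("bridge netfilter sysctl missing, loading br_netfilter")
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v (%s)\n", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0); err != nil {
		fmt.Println("enabling ip_forward needs root:", err)
	}
}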
I0316 18:17:17.116045 841431 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0316 18:17:17.254390 841431 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0316 18:17:17.288841 841431 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
I0316 18:17:17.288923 841431 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0316 18:17:17.294823 841431 retry.go:31] will retry after 1.431471638s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0316 18:17:18.727391 841431 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
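The 60s socket wait is a stat-until-present loop: the first stat fails while containerd restarts, retry.go sleeps about 1.4s, and the second stat succeeds. A sketch with a fixed 1s poll standing in for minikube's jittered backoff:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("containerd socket is up")
}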
I0316 18:17:18.733834 841431 start.go:562] Will wait 60s for crictl version
I0316 18:17:18.733903 841431 ssh_runner.go:195] Run: which crictl
I0316 18:17:18.739046 841431 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0316 18:17:18.791238 841431 start.go:578] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.14
RuntimeApiVersion: v1
I0316 18:17:18.791309 841431 ssh_runner.go:195] Run: containerd --version
I0316 18:17:18.830819 841431 ssh_runner.go:195] Run: containerd --version
I0316 18:17:18.872315 841431 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on containerd 1.7.14 ...
I0316 18:17:18.873653 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetIP
I0316 18:17:18.876402 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:18.876758 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:18.876791 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:18.876986 841431 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0316 18:17:18.882277 841431 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
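The grep -v / echo / cp pipeline above is an idempotent hosts-file upsert: drop any stale host.minikube.internal line, append the current gateway mapping, and copy the file back via sudo. The same transform in Go, printing the result instead of touching /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing tab-separated mapping for name (the grep -v
// step) and appends the fresh one, mirroring the shell pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Print rather than write back; the real flow copies via sudo cp.
	fmt.Print(upsertHost(string(in), "192.168.72.1", "host.minikube.internal"))
}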
I0316 18:17:18.902779 841431 out.go:177] - kubeadm.pod-network-cidr=10.42.0.0/16
I0316 18:17:14.519518 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:16.519651 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:19.019366 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:18.904376 841431 kubeadm.go:877] updating cluster {Name:newest-cni-993416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-993416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0316 18:17:18.904552 841431 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
I0316 18:17:18.904644 841431 ssh_runner.go:195] Run: sudo crictl images --output json
I0316 18:17:18.951816 841431 containerd.go:612] all images are preloaded for containerd runtime.
I0316 18:17:18.951843 841431 containerd.go:519] Images already preloaded, skipping extraction
I0316 18:17:18.951903 841431 ssh_runner.go:195] Run: sudo crictl images --output json
I0316 18:17:18.998694 841431 containerd.go:612] all images are preloaded for containerd runtime.
I0316 18:17:18.998725 841431 cache_images.go:84] Images are preloaded, skipping loading
I0316 18:17:18.998737 841431 kubeadm.go:928] updating node { 192.168.72.228 8443 v1.29.0-rc.2 containerd true true} ...
I0316 18:17:18.998890 841431 kubeadm.go:940] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-993416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.228
[Install]
config:
{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-993416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0316 18:17:18.998969 841431 ssh_runner.go:195] Run: sudo crictl info
I0316 18:17:19.053845 841431 cni.go:84] Creating CNI manager for ""
I0316 18:17:19.053877 841431 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0316 18:17:19.053894 841431 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
I0316 18:17:19.053947 841431 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.228 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-993416 NodeName:newest-cni-993416 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0316 18:17:19.054110 841431 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.72.228
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "newest-cni-993416"
  kubeletExtraArgs:
    node-ip: 192.168.72.228
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.72.228"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
    feature-gates: "ServerSideApply=true"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    feature-gates: "ServerSideApply=true"
    leader-elect: "false"
scheduler:
  extraArgs:
    feature-gates: "ServerSideApply=true"
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.29.0-rc.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.42.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0316 18:17:19.054203 841431 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
I0316 18:17:19.069549 841431 binaries.go:44] Found k8s binaries, skipping transfer
I0316 18:17:19.069638 841431 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0316 18:17:19.081418 841431 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
I0316 18:17:19.102862 841431 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
I0316 18:17:19.124134 841431 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2306 bytes)
I0316 18:17:19.146599 841431 ssh_runner.go:195] Run: grep 192.168.72.228 control-plane.minikube.internal$ /etc/hosts
I0316 18:17:19.151909 841431 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.228 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0316 18:17:19.169197 841431 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0316 18:17:19.309000 841431 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0316 18:17:19.331332 841431 certs.go:68] Setting up /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416 for IP: 192.168.72.228
I0316 18:17:19.331366 841431 certs.go:194] generating shared ca certs ...
I0316 18:17:19.331389 841431 certs.go:226] acquiring lock for ca certs: {Name:mk0c50354a81ee6e126f21f3d5a16214134194fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0316 18:17:19.331568 841431 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18277-781196/.minikube/ca.key
I0316 18:17:19.331608 841431 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.key
I0316 18:17:19.331616 841431 certs.go:256] generating profile certs ...
I0316 18:17:19.331738 841431 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/client.key
I0316 18:17:19.331835 841431 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/apiserver.key.6606b315
I0316 18:17:19.331885 841431 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/proxy-client.key
I0316 18:17:19.331987 841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442.pem (1338 bytes)
W0316 18:17:19.332021 841431 certs.go:480] ignoring /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442_empty.pem, impossibly tiny 0 bytes
I0316 18:17:19.332029 841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem (1679 bytes)
I0316 18:17:19.332050 841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem (1082 bytes)
I0316 18:17:19.332074 841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem (1123 bytes)
I0316 18:17:19.332101 841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem (1675 bytes)
I0316 18:17:19.332138 841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem (1708 bytes)
I0316 18:17:19.332941 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0316 18:17:19.371244 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0316 18:17:19.412285 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0316 18:17:19.450101 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0316 18:17:19.485371 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0316 18:17:19.521337 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0316 18:17:19.560592 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0316 18:17:19.597429 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0316 18:17:19.631736 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem --> /usr/share/ca-certificates/7884422.pem (1708 bytes)
I0316 18:17:19.662038 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0316 18:17:19.693854 841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442.pem --> /usr/share/ca-certificates/788442.pem (1338 bytes)
I0316 18:17:19.726417 841431 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0316 18:17:19.749016 841431 ssh_runner.go:195] Run: openssl version
I0316 18:17:19.756280 841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7884422.pem && ln -fs /usr/share/ca-certificates/7884422.pem /etc/ssl/certs/7884422.pem"
I0316 18:17:19.771479 841431 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7884422.pem
I0316 18:17:19.777588 841431 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 16 17:02 /usr/share/ca-certificates/7884422.pem
I0316 18:17:19.777667 841431 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7884422.pem
I0316 18:17:19.785507 841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7884422.pem /etc/ssl/certs/3ec20f2e.0"
I0316 18:17:19.802306 841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0316 18:17:19.818636 841431 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0316 18:17:19.825230 841431 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 16 16:56 /usr/share/ca-certificates/minikubeCA.pem
I0316 18:17:19.825307 841431 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0316 18:17:19.832744 841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0316 18:17:19.847571 841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/788442.pem && ln -fs /usr/share/ca-certificates/788442.pem /etc/ssl/certs/788442.pem"
I0316 18:17:19.862872 841431 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/788442.pem
I0316 18:17:19.869402 841431 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 16 17:02 /usr/share/ca-certificates/788442.pem
I0316 18:17:19.869490 841431 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/788442.pem
I0316 18:17:19.876895 841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/788442.pem /etc/ssl/certs/51391683.0"
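Each of the three cert installs above follows the OpenSSL trust-store convention: compute the subject hash of the PEM, then symlink <hash>.0 in /etc/ssl/certs at it. A sketch that shells out to openssl for the hash; the target directory is a parameter so it can run against a scratch dir:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert hashes the certificate subject and creates the <hash>.0
// symlink OpenSSL expects, matching the openssl/ln pair in the log.
func installCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	// /tmp/certs stands in for /etc/ssl/certs, which needs root.
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
		fmt.Println("install failed:", err)
	}
}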
I0316 18:17:19.892130 841431 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0316 18:17:19.898268 841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0316 18:17:19.905980 841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0316 18:17:19.913801 841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0316 18:17:19.921756 841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0316 18:17:19.930123 841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0316 18:17:19.938266 841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
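Every openssl x509 -checkend 86400 run above asks one question: does this certificate survive the next 24 hours? The equivalent check in Go, parsing the PEM and comparing NotAfter against now plus the window:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside the
// given window, the same predicate -checkend 86400 evaluates.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}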
I0316 18:17:19.946303 841431 kubeadm.go:391] StartCluster: {Name:newest-cni-993416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-993416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0316 18:17:19.946404 841431 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0316 18:17:19.946466 841431 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0316 18:17:19.998436 841431 cri.go:89] found id: "a71833ecc67de27b7e2cff17605a6f58252d6af66e62ca9d2e011e312ba56200"
I0316 18:17:19.998471 841431 cri.go:89] found id: "69aa0a81debd0b781ccced3e81eb778e7a148b96b1bde020d132d5e5684a75f5"
I0316 18:17:19.998478 841431 cri.go:89] found id: "0edf488fb1cbfad331fbc504372cd2726a4af55918a333176a2c7e1487eda0b3"
I0316 18:17:19.998483 841431 cri.go:89] found id: "e091404a6139a0f992e59474c4c3d5acaea8d175b13b3704508458556f16aef6"
I0316 18:17:19.998496 841431 cri.go:89] found id: "761688729782830a759f339f0603d5276a117c549be3230363a12e289e688a01"
I0316 18:17:19.998505 841431 cri.go:89] found id: "d404131e07cded2bda65abc2bc08661a3f501956c1431d91b53ef2d61bdc6ff7"
I0316 18:17:19.998508 841431 cri.go:89] found id: "3f2ee94758eaa4175186f378d85fe346d2fc5f3ea161a1325e1d24593be3d5bc"
I0316 18:17:19.998513 841431 cri.go:89] found id: "6f27aa35bed1c441bf6062b1ab25c5cf18e127bd3891d160dd7c26c0f29af1f2"
I0316 18:17:19.998517 841431 cri.go:89] found id: ""
I0316 18:17:19.998571 841431 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0316 18:17:20.016557 841431 cri.go:116] JSON = null
W0316 18:17:20.016625 841431 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
I0316 18:17:20.016712 841431 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
W0316 18:17:20.030189 841431 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
I0316 18:17:20.030216 841431 kubeadm.go:407] found existing configuration files, will attempt cluster restart
I0316 18:17:20.030221 841431 kubeadm.go:587] restartPrimaryControlPlane start ...
I0316 18:17:20.030266 841431 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0316 18:17:20.043013 841431 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0316 18:17:20.043748 841431 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-993416" does not appear in /home/jenkins/minikube-integration/18277-781196/kubeconfig
I0316 18:17:20.044051 841431 kubeconfig.go:62] /home/jenkins/minikube-integration/18277-781196/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-993416" cluster setting kubeconfig missing "newest-cni-993416" context setting]
I0316 18:17:20.044591 841431 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-781196/kubeconfig: {Name:mke76908283b58e263a226954335fd60fd02692a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0316 18:17:20.046076 841431 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0316 18:17:20.059175 841431 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.228
I0316 18:17:20.059227 841431 kubeadm.go:1154] stopping kube-system containers ...
I0316 18:17:20.059243 841431 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0316 18:17:20.059329 841431 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0316 18:17:20.103617 841431 cri.go:89] found id: "a71833ecc67de27b7e2cff17605a6f58252d6af66e62ca9d2e011e312ba56200"
I0316 18:17:20.103643 841431 cri.go:89] found id: "69aa0a81debd0b781ccced3e81eb778e7a148b96b1bde020d132d5e5684a75f5"
I0316 18:17:20.103647 841431 cri.go:89] found id: "0edf488fb1cbfad331fbc504372cd2726a4af55918a333176a2c7e1487eda0b3"
I0316 18:17:20.103650 841431 cri.go:89] found id: "e091404a6139a0f992e59474c4c3d5acaea8d175b13b3704508458556f16aef6"
I0316 18:17:20.103653 841431 cri.go:89] found id: "761688729782830a759f339f0603d5276a117c549be3230363a12e289e688a01"
I0316 18:17:20.103657 841431 cri.go:89] found id: "d404131e07cded2bda65abc2bc08661a3f501956c1431d91b53ef2d61bdc6ff7"
I0316 18:17:20.103660 841431 cri.go:89] found id: "3f2ee94758eaa4175186f378d85fe346d2fc5f3ea161a1325e1d24593be3d5bc"
I0316 18:17:20.103664 841431 cri.go:89] found id: "6f27aa35bed1c441bf6062b1ab25c5cf18e127bd3891d160dd7c26c0f29af1f2"
I0316 18:17:20.103668 841431 cri.go:89] found id: ""
I0316 18:17:20.103677 841431 cri.go:234] Stopping containers: [a71833ecc67de27b7e2cff17605a6f58252d6af66e62ca9d2e011e312ba56200 69aa0a81debd0b781ccced3e81eb778e7a148b96b1bde020d132d5e5684a75f5 0edf488fb1cbfad331fbc504372cd2726a4af55918a333176a2c7e1487eda0b3 e091404a6139a0f992e59474c4c3d5acaea8d175b13b3704508458556f16aef6 761688729782830a759f339f0603d5276a117c549be3230363a12e289e688a01 d404131e07cded2bda65abc2bc08661a3f501956c1431d91b53ef2d61bdc6ff7 3f2ee94758eaa4175186f378d85fe346d2fc5f3ea161a1325e1d24593be3d5bc 6f27aa35bed1c441bf6062b1ab25c5cf18e127bd3891d160dd7c26c0f29af1f2]
I0316 18:17:20.103748 841431 ssh_runner.go:195] Run: which crictl
I0316 18:17:20.109013 841431 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 a71833ecc67de27b7e2cff17605a6f58252d6af66e62ca9d2e011e312ba56200 69aa0a81debd0b781ccced3e81eb778e7a148b96b1bde020d132d5e5684a75f5 0edf488fb1cbfad331fbc504372cd2726a4af55918a333176a2c7e1487eda0b3 e091404a6139a0f992e59474c4c3d5acaea8d175b13b3704508458556f16aef6 761688729782830a759f339f0603d5276a117c549be3230363a12e289e688a01 d404131e07cded2bda65abc2bc08661a3f501956c1431d91b53ef2d61bdc6ff7 3f2ee94758eaa4175186f378d85fe346d2fc5f3ea161a1325e1d24593be3d5bc 6f27aa35bed1c441bf6062b1ab25c5cf18e127bd3891d160dd7c26c0f29af1f2
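Stopping the kube-system workload is two crictl calls: list every container, running or not, that carries the kube-system pod-namespace label, then stop the whole batch with a 10s grace period. Sketched directly in Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// crictl ps -a --quiet --label ... prints one container ID per line.
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers")
		return
	}
	// One batched stop with the same 10s timeout the log uses.
	args := append([]string{"stop", "--timeout=10"}, ids...)
	if err := exec.Command("crictl", args...).Run(); err != nil {
		fmt.Println("stop failed:", err)
	}
}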
I0316 18:17:20.154788 841431 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0316 18:17:20.173228 841431 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0316 18:17:20.185106 841431 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0316 18:17:20.185133 841431 kubeadm.go:156] found existing configuration files:
I0316 18:17:20.185190 841431 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0316 18:17:20.196457 841431 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0316 18:17:20.196535 841431 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0316 18:17:20.208090 841431 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0316 18:17:20.219476 841431 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0316 18:17:20.219594 841431 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0316 18:17:20.231087 841431 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0316 18:17:20.242471 841431 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0316 18:17:20.242539 841431 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0316 18:17:20.254512 841431 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0316 18:17:20.266221 841431 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0316 18:17:20.266313 841431 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0316 18:17:20.278335 841431 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0316 18:17:20.291364 841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0316 18:17:20.441748 841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0316 18:17:21.552425 841431 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.110633969s)
I0316 18:17:21.552480 841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0316 18:17:21.787500 841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0316 18:17:21.883417 841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
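Because existing configuration files were found, restartPrimaryControlPlane replays individual kubeadm init phases rather than a full init: certs, kubeconfig, kubelet-start, control-plane, and local etcd, each against the freshly copied kubeadm.yaml. A sketch of that phase loop, with the binary and config paths taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same phase order the log shows, replayed against the existing config.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm"
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("control plane phases replayed")
}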
I0316 18:17:21.979379 841431 api_server.go:52] waiting for apiserver process to appear ...
I0316 18:17:21.979505 841431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:17:22.479612 841431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:17:22.980465 841431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:17:21.019491 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:23.021112 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:23.480359 841431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:17:23.512208 841431 api_server.go:72] duration metric: took 1.53285958s to wait for apiserver process to appear ...
I0316 18:17:23.512244 841431 api_server.go:88] waiting for apiserver healthz status ...
I0316 18:17:23.512269 841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
I0316 18:17:23.512848 841431 api_server.go:269] stopped: https://192.168.72.228:8443/healthz: Get "https://192.168.72.228:8443/healthz": dial tcp 192.168.72.228:8443: connect: connection refused
I0316 18:17:24.012400 841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
I0316 18:17:26.387879 841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0316 18:17:26.387946 841431 api_server.go:103] status: https://192.168.72.228:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0316 18:17:26.387968 841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
I0316 18:17:26.417506 841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0316 18:17:26.417545 841431 api_server.go:103] status: https://192.168.72.228:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0316 18:17:26.512809 841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
I0316 18:17:26.525228 841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0316 18:17:26.525276 841431 api_server.go:103] status: https://192.168.72.228:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0316 18:17:27.012795 841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
I0316 18:17:27.024678 841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0316 18:17:27.024722 841431 api_server.go:103] status: https://192.168.72.228:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0316 18:17:27.513345 841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
I0316 18:17:27.530929 841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0316 18:17:27.530980 841431 api_server.go:103] status: https://192.168.72.228:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0316 18:17:28.012475 841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
I0316 18:17:28.017944 841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 200:
ok
I0316 18:17:28.025825 841431 api_server.go:141] control plane version: v1.29.0-rc.2
I0316 18:17:28.025883 841431 api_server.go:131] duration metric: took 4.513628784s to wait for apiserver health ...
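The healthz wait that just completed passes through three distinct states before success: connection refused while the apiserver binds, 403 because the anonymous probe is not yet authorized, then 500 until the rbac/bootstrap-roles and priority-class poststart hooks finish. A polling sketch that treats anything but a 200 as retryable; InsecureSkipVerify stands in for the CA-pinned client minikube actually builds:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until it returns 200 "ok" or the timeout hits.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403s and 500s are expected while the apiserver warms up.
			fmt.Printf("healthz %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.72.228:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}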
I0316 18:17:28.025897 841431 cni.go:84] Creating CNI manager for ""
I0316 18:17:28.025907 841431 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0316 18:17:28.027996 841431 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0316 18:17:28.029481 841431 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0316 18:17:28.042768 841431 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
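The 457-byte 1-k8s.conflist written above is minikube's bridge CNI chain. Its exact contents are not reproduced in the log, so the conflist below is only an illustrative bridge-plus-portmap configuration for the 10.42.0.0/16 pod CIDR in use here, not a byte-for-byte copy:

package main

import (
	"fmt"
	"os"
)

// Illustrative bridge CNI chain; field values are assumptions apart from
// the pod CIDR, which comes from kubeadm.pod-network-cidr in the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.42.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	path := "/tmp/1-k8s.conflist" // real target: /etc/cni/net.d/1-k8s.conflist
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote", path)
}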
I0316 18:17:28.073448 841431 system_pods.go:43] waiting for kube-system pods to appear ...
I0316 18:17:28.085932 841431 system_pods.go:59] 9 kube-system pods found
I0316 18:17:28.085981 841431 system_pods.go:61] "coredns-76f75df574-hkkkh" [efd50172-4179-4235-adcf-2cc14383680d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0316 18:17:28.085991 841431 system_pods.go:61] "coredns-76f75df574-rhrkz" [3f5fe20f-4f2b-4dad-ab54-c00261ce77fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0316 18:17:28.086002 841431 system_pods.go:61] "etcd-newest-cni-993416" [f9d9e16d-4c48-41ef-954d-84b2adc1d678] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0316 18:17:28.086021 841431 system_pods.go:61] "kube-apiserver-newest-cni-993416" [b745c8a8-8c3a-48a8-8884-8952190b871e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0316 18:17:28.086032 841431 system_pods.go:61] "kube-controller-manager-newest-cni-993416" [d0879001-bfc2-4268-a421-9257bc6155cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0316 18:17:28.086041 841431 system_pods.go:61] "kube-proxy-lbfnv" [4269401d-14f7-4752-a7df-ec3f9da042d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0316 18:17:28.086055 841431 system_pods.go:61] "kube-scheduler-newest-cni-993416" [53741680-de3a-449b-ab2b-a520bc8c2c54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0316 18:17:28.086067 841431 system_pods.go:61] "metrics-server-57f55c9bc5-rbrmj" [3eabea78-4346-49ea-ada5-72c98a6daa7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0316 18:17:28.086081 841431 system_pods.go:61] "storage-provisioner" [0d551c52-212b-4b92-9b76-e1034e2d8d0b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0316 18:17:28.086098 841431 system_pods.go:74] duration metric: took 12.609767ms to wait for pod list to return data ...
I0316 18:17:28.086110 841431 node_conditions.go:102] verifying NodePressure condition ...
I0316 18:17:28.095367 841431 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0316 18:17:28.095404 841431 node_conditions.go:123] node cpu capacity is 2
I0316 18:17:28.095470 841431 node_conditions.go:105] duration metric: took 9.349036ms to run NodePressure ...
I0316 18:17:28.095509 841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0316 18:17:28.403986 841431 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0316 18:17:28.424295 841431 ops.go:34] apiserver oom_adj: -16
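The final sanity check reads the apiserver's oom_adj through /proc and expects -16, confirming kubeadm's static pods are shielded from the OOM killer. The same lookup in Go, reusing the pgrep flags from the log (-x exact, -n newest, -f full command line):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver process, as the log's pgrep does.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}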
I0316 18:17:28.424329 841431 kubeadm.go:591] duration metric: took 8.394102538s to restartPrimaryControlPlane
I0316 18:17:28.424343 841431 kubeadm.go:393] duration metric: took 8.478062582s to StartCluster
I0316 18:17:28.424368 841431 settings.go:142] acquiring lock: {Name:mk5e1e3433840176063e5baa5db7056716046a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0316 18:17:28.424472 841431 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/18277-781196/kubeconfig
I0316 18:17:28.425801 841431 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-781196/kubeconfig: {Name:mke76908283b58e263a226954335fd60fd02692a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0316 18:17:28.426202 841431 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0316 18:17:28.427700 841431 out.go:177] * Verifying Kubernetes components...
I0316 18:17:28.426291 841431 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
I0316 18:17:28.426509 841431 config.go:182] Loaded profile config "newest-cni-993416": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
I0316 18:17:28.429281 841431 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0316 18:17:28.427842 841431 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-993416"
I0316 18:17:28.429391 841431 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-993416"
W0316 18:17:28.429410 841431 addons.go:243] addon storage-provisioner should already be in state true
I0316 18:17:28.427844 841431 addons.go:69] Setting dashboard=true in profile "newest-cni-993416"
I0316 18:17:28.429450 841431 host.go:66] Checking if "newest-cni-993416" exists ...
I0316 18:17:28.429469 841431 addons.go:234] Setting addon dashboard=true in "newest-cni-993416"
W0316 18:17:28.429481 841431 addons.go:243] addon dashboard should already be in state true
I0316 18:17:28.429509 841431 host.go:66] Checking if "newest-cni-993416" exists ...
I0316 18:17:28.427858 841431 addons.go:69] Setting default-storageclass=true in profile "newest-cni-993416"
I0316 18:17:28.429616 841431 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-993416"
I0316 18:17:28.427872 841431 addons.go:69] Setting metrics-server=true in profile "newest-cni-993416"
I0316 18:17:28.429723 841431 addons.go:234] Setting addon metrics-server=true in "newest-cni-993416"
W0316 18:17:28.429738 841431 addons.go:243] addon metrics-server should already be in state true
I0316 18:17:28.429777 841431 host.go:66] Checking if "newest-cni-993416" exists ...
I0316 18:17:28.429889 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:17:28.429936 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:17:28.429953 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:17:28.429996 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:17:28.430042 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:17:28.430073 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:17:28.430169 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:17:28.430210 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:17:28.447013 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43297
I0316 18:17:28.447559 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:17:28.448208 841431 main.go:141] libmachine: Using API Version 1
I0316 18:17:28.448238 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:17:28.448677 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:17:28.449343 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:17:28.449398 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:17:28.451831 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44269
I0316 18:17:28.451847 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
I0316 18:17:28.452339 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:17:28.452533 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:17:28.453149 841431 main.go:141] libmachine: Using API Version 1
I0316 18:17:28.453169 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:17:28.453289 841431 main.go:141] libmachine: Using API Version 1
I0316 18:17:28.453307 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:17:28.453621 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:17:28.453815 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:17:28.454315 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:17:28.454370 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:17:28.454605 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
I0316 18:17:28.455348 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37275
I0316 18:17:28.456170 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:17:28.456672 841431 main.go:141] libmachine: Using API Version 1
I0316 18:17:28.456692 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:17:28.457050 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:17:28.457637 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:17:28.457695 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:17:28.458326 841431 addons.go:234] Setting addon default-storageclass=true in "newest-cni-993416"
W0316 18:17:28.458344 841431 addons.go:243] addon default-storageclass should already be in state true
I0316 18:17:28.458374 841431 host.go:66] Checking if "newest-cni-993416" exists ...
I0316 18:17:28.458734 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:17:28.458778 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:17:28.471779 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
I0316 18:17:28.471775 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
I0316 18:17:28.472290 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:17:28.472402 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:17:28.472843 841431 main.go:141] libmachine: Using API Version 1
I0316 18:17:28.472868 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:17:28.472994 841431 main.go:141] libmachine: Using API Version 1
I0316 18:17:28.473017 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:17:28.473334 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:17:28.473346 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:17:28.473512 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
I0316 18:17:28.473688 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
I0316 18:17:28.475749 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:17:28.478042 841431 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0316 18:17:28.476260 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:17:28.479470 841431 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0316 18:17:28.479493 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0316 18:17:28.479525 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:28.481120 841431 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0316 18:17:28.481537 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42525
I0316 18:17:28.482639 841431 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0316 18:17:28.484048 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:28.483378 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:17:28.484125 841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0316 18:17:28.484141 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0316 18:17:28.484243 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:28.483617 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:28.484315 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:28.484341 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:28.484373 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:28.484491 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:28.484689 841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
I0316 18:17:28.485783 841431 main.go:141] libmachine: Using API Version 1
I0316 18:17:28.485810 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:17:28.486348 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:17:28.487057 841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 18:17:28.487114 841431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 18:17:28.487350 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
I0316 18:17:28.487634 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:28.487833 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:17:28.488082 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:28.488108 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:28.488442 841431 main.go:141] libmachine: Using API Version 1
I0316 18:17:28.488472 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:17:28.488482 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:28.488683 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:28.488860 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:17:28.488898 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:28.489070 841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
I0316 18:17:28.489173 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
I0316 18:17:28.490917 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:17:28.493057 841431 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0316 18:17:25.519500 838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
I0316 18:17:25.519540 838136 pod_ready.go:81] duration metric: took 4m0.007912771s for pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace to be "Ready" ...
E0316 18:17:25.519551 838136 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0316 18:17:25.519559 838136 pod_ready.go:38] duration metric: took 5m48.09067273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0316 18:17:25.519577 838136 api_server.go:52] waiting for apiserver process to appear ...
I0316 18:17:25.519614 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0316 18:17:25.519725 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0316 18:17:25.587023 838136 cri.go:89] found id: "84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438"
I0316 18:17:25.587057 838136 cri.go:89] found id: ""
I0316 18:17:25.587068 838136 logs.go:276] 1 containers: [84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438]
I0316 18:17:25.587136 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.593870 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0316 18:17:25.593959 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0316 18:17:25.644646 838136 cri.go:89] found id: "2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9"
I0316 18:17:25.644677 838136 cri.go:89] found id: ""
I0316 18:17:25.644687 838136 logs.go:276] 1 containers: [2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9]
I0316 18:17:25.644751 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.652161 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0316 18:17:25.652231 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0316 18:17:25.712920 838136 cri.go:89] found id: "61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77"
I0316 18:17:25.712955 838136 cri.go:89] found id: ""
I0316 18:17:25.712967 838136 logs.go:276] 1 containers: [61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77]
I0316 18:17:25.713041 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.719028 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0316 18:17:25.719136 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0316 18:17:25.773897 838136 cri.go:89] found id: "34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c"
I0316 18:17:25.773927 838136 cri.go:89] found id: ""
I0316 18:17:25.773937 838136 logs.go:276] 1 containers: [34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c]
I0316 18:17:25.774002 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.780138 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0316 18:17:25.780246 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0316 18:17:25.843279 838136 cri.go:89] found id: "d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd"
I0316 18:17:25.843309 838136 cri.go:89] found id: ""
I0316 18:17:25.843317 838136 logs.go:276] 1 containers: [d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd]
I0316 18:17:25.843375 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.848956 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0316 18:17:25.849060 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0316 18:17:25.899592 838136 cri.go:89] found id: "05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554"
I0316 18:17:25.899624 838136 cri.go:89] found id: "162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72"
I0316 18:17:25.899630 838136 cri.go:89] found id: ""
I0316 18:17:25.899641 838136 logs.go:276] 2 containers: [05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554 162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72]
I0316 18:17:25.899710 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.907916 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:25.918955 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0316 18:17:25.919046 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0316 18:17:25.971433 838136 cri.go:89] found id: ""
I0316 18:17:25.971478 838136 logs.go:276] 0 containers: []
W0316 18:17:25.971490 838136 logs.go:278] No container was found matching "kindnet"
I0316 18:17:25.971498 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0316 18:17:25.971572 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0316 18:17:26.021187 838136 cri.go:89] found id: "aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3"
I0316 18:17:26.021220 838136 cri.go:89] found id: ""
I0316 18:17:26.021229 838136 logs.go:276] 1 containers: [aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3]
I0316 18:17:26.021296 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:26.028046 838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0316 18:17:26.028122 838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0316 18:17:26.086850 838136 cri.go:89] found id: "aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8"
I0316 18:17:26.086875 838136 cri.go:89] found id: "7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6"
I0316 18:17:26.086879 838136 cri.go:89] found id: ""
I0316 18:17:26.086887 838136 logs.go:276] 2 containers: [aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8 7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6]
I0316 18:17:26.086940 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:26.093302 838136 ssh_runner.go:195] Run: which crictl
I0316 18:17:26.101414 838136 logs.go:123] Gathering logs for etcd [2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9] ...
I0316 18:17:26.101443 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9"
I0316 18:17:26.171632 838136 logs.go:123] Gathering logs for coredns [61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77] ...
I0316 18:17:26.171697 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77"
I0316 18:17:26.219764 838136 logs.go:123] Gathering logs for storage-provisioner [7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6] ...
I0316 18:17:26.219813 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6"
I0316 18:17:26.281101 838136 logs.go:123] Gathering logs for describe nodes ...
I0316 18:17:26.281153 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0316 18:17:26.484976 838136 logs.go:123] Gathering logs for kube-controller-manager [162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72] ...
I0316 18:17:26.485019 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72"
I0316 18:17:26.556929 838136 logs.go:123] Gathering logs for container status ...
I0316 18:17:26.556977 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0316 18:17:26.609552 838136 logs.go:123] Gathering logs for storage-provisioner [aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8] ...
I0316 18:17:26.609594 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8"
I0316 18:17:26.656257 838136 logs.go:123] Gathering logs for kubelet ...
I0316 18:17:26.656294 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0316 18:17:26.698787 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:24 old-k8s-version-985498 kubelet[888]: E0316 18:11:24.452217 888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-210505493 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/35: file exists"
W0316 18:17:26.703383 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:27 old-k8s-version-985498 kubelet[888]: E0316 18:11:27.530957 888 pod_workers.go:191] Error syncing pod 31a485c797dc9b239357ad3b694dc86e ("kube-apiserver-old-k8s-version-985498_kube-system(31a485c797dc9b239357ad3b694dc86e)"), skipping: failed to "StartContainer" for "kube-apiserver" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3710715184 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/36: file exists"
W0316 18:17:26.705326 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:29 old-k8s-version-985498 kubelet[888]: E0316 18:11:29.589592 888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"
W0316 18:17:26.708845 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:33 old-k8s-version-985498 kubelet[888]: E0316 18:11:33.774758 888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"
W0316 18:17:26.713784 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:34 old-k8s-version-985498 kubelet[888]: E0316 18:11:34.296039 888 pod_workers.go:191] Error syncing pod d89b271f-838a-4592-b128-fcb2a06fc5e9 ("storage-provisioner_kube-system(d89b271f-838a-4592-b128-fcb2a06fc5e9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1431217611 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/38: file exists"
W0316 18:17:26.719803 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:37 old-k8s-version-985498 kubelet[888]: E0316 18:11:37.840851 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.719947 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:38 old-k8s-version-985498 kubelet[888]: E0316 18:11:38.487672 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.721883 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:48 old-k8s-version-985498 kubelet[888]: E0316 18:11:48.375825 888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1993581407 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/41: file exists"
W0316 18:17:26.723186 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:48 old-k8s-version-985498 kubelet[888]: E0316 18:11:48.539670 888 pod_workers.go:191] Error syncing pod daf8607f-2ff3-4d80-b1ed-ca2d24cb6b36 ("kube-proxy-nvd4k_kube-system(daf8607f-2ff3-4d80-b1ed-ca2d24cb6b36)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2911645386 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/42: file exists"
W0316 18:17:26.725902 838136 logs.go:138] Found kubelet problem: Mar 16 18:11:50 old-k8s-version-985498 kubelet[888]: E0316 18:11:50.493127 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.727816 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:01 old-k8s-version-985498 kubelet[888]: E0316 18:12:01.388860 888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2375308116 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/44: file exists"
W0316 18:17:26.727957 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:02 old-k8s-version-985498 kubelet[888]: E0316 18:12:02.347425 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.729296 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:04 old-k8s-version-985498 kubelet[888]: E0316 18:12:04.759315 888 pod_workers.go:191] Error syncing pod 9d1a1153-d964-4893-aae0-6b926755edf4 ("busybox_default(9d1a1153-d964-4893-aae0-6b926755edf4)"), skipping: failed to "StartContainer" for "busybox" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\": failed to prepare extraction snapshot \"extract-753167480-EI9m sha256:e49dd1e534d9df22f1c5041581eaeb3f23fc6ef51ac5a4963ab35adc8f056f65\": failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2174206111 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45: file exists"
W0316 18:17:26.729513 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:05 old-k8s-version-985498 kubelet[888]: E0316 18:12:05.583630 888 pod_workers.go:191] Error syncing pod 9d1a1153-d964-4893-aae0-6b926755edf4 ("busybox_default(9d1a1153-d964-4893-aae0-6b926755edf4)"), skipping: failed to "StartContainer" for "busybox" with ImagePullBackOff: "Back-off pulling image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
W0316 18:17:26.731335 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:17 old-k8s-version-985498 kubelet[888]: E0316 18:12:17.365731 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.732305 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:31 old-k8s-version-985498 kubelet[888]: E0316 18:12:31.362316 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.732729 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:38 old-k8s-version-985498 kubelet[888]: E0316 18:12:38.782628 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.732969 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:39 old-k8s-version-985498 kubelet[888]: E0316 18:12:39.791862 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.733111 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:43 old-k8s-version-985498 kubelet[888]: E0316 18:12:43.348091 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.733346 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:46 old-k8s-version-985498 kubelet[888]: E0316 18:12:46.689033 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.735058 838136 logs.go:138] Found kubelet problem: Mar 16 18:12:58 old-k8s-version-985498 kubelet[888]: E0316 18:12:58.404260 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.735490 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:02 old-k8s-version-985498 kubelet[888]: E0316 18:13:02.883259 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.735729 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:06 old-k8s-version-985498 kubelet[888]: E0316 18:13:06.689066 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.735866 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:11 old-k8s-version-985498 kubelet[888]: E0316 18:13:11.347423 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.736102 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:20 old-k8s-version-985498 kubelet[888]: E0316 18:13:20.346818 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.736237 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:22 old-k8s-version-985498 kubelet[888]: E0316 18:13:22.349160 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.736374 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:34 old-k8s-version-985498 kubelet[888]: E0316 18:13:34.347075 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.736801 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:36 old-k8s-version-985498 kubelet[888]: E0316 18:13:36.006325 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.737037 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:37 old-k8s-version-985498 kubelet[888]: E0316 18:13:37.013902 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.737173 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:46 old-k8s-version-985498 kubelet[888]: E0316 18:13:46.347475 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.737421 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:51 old-k8s-version-985498 kubelet[888]: E0316 18:13:51.347194 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.737556 838136 logs.go:138] Found kubelet problem: Mar 16 18:13:58 old-k8s-version-985498 kubelet[888]: E0316 18:13:58.348592 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.737794 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:03 old-k8s-version-985498 kubelet[888]: E0316 18:14:03.346460 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.737933 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:09 old-k8s-version-985498 kubelet[888]: E0316 18:14:09.347794 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.738169 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:15 old-k8s-version-985498 kubelet[888]: E0316 18:14:15.348212 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.739915 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:21 old-k8s-version-985498 kubelet[888]: E0316 18:14:21.360852 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.740357 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:29 old-k8s-version-985498 kubelet[888]: E0316 18:14:29.175538 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.740493 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:32 old-k8s-version-985498 kubelet[888]: E0316 18:14:32.348500 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.740728 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:36 old-k8s-version-985498 kubelet[888]: E0316 18:14:36.689558 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.740867 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:46 old-k8s-version-985498 kubelet[888]: E0316 18:14:46.348058 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.741102 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:49 old-k8s-version-985498 kubelet[888]: E0316 18:14:49.347315 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.741235 838136 logs.go:138] Found kubelet problem: Mar 16 18:14:57 old-k8s-version-985498 kubelet[888]: E0316 18:14:57.349480 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.741471 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:03 old-k8s-version-985498 kubelet[888]: E0316 18:15:03.346815 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.741606 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:10 old-k8s-version-985498 kubelet[888]: E0316 18:15:10.347187 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.741845 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:18 old-k8s-version-985498 kubelet[888]: E0316 18:15:18.346934 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.741980 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:25 old-k8s-version-985498 kubelet[888]: E0316 18:15:25.347491 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.742249 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:29 old-k8s-version-985498 kubelet[888]: E0316 18:15:29.347101 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.742385 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:39 old-k8s-version-985498 kubelet[888]: E0316 18:15:39.347176 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.742620 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:42 old-k8s-version-985498 kubelet[888]: E0316 18:15:42.347133 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.742754 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:50 old-k8s-version-985498 kubelet[888]: E0316 18:15:50.348255 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.743180 838136 logs.go:138] Found kubelet problem: Mar 16 18:15:58 old-k8s-version-985498 kubelet[888]: E0316 18:15:58.519929 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.743316 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:03 old-k8s-version-985498 kubelet[888]: E0316 18:16:03.347044 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.743562 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:06 old-k8s-version-985498 kubelet[888]: E0316 18:16:06.689281 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.743697 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:17 old-k8s-version-985498 kubelet[888]: E0316 18:16:17.347194 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.743937 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:19 old-k8s-version-985498 kubelet[888]: E0316 18:16:19.346699 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.744072 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:30 old-k8s-version-985498 kubelet[888]: E0316 18:16:30.348163 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.744308 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:34 old-k8s-version-985498 kubelet[888]: E0316 18:16:34.346242 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.744441 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:41 old-k8s-version-985498 kubelet[888]: E0316 18:16:41.347306 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.744677 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:49 old-k8s-version-985498 kubelet[888]: E0316 18:16:49.347088 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.744816 838136 logs.go:138] Found kubelet problem: Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:26.745050 838136 logs.go:138] Found kubelet problem: Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.746768 838136 logs.go:138] Found kubelet problem: Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362954 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:26.747010 838136 logs.go:138] Found kubelet problem: Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: E0316 18:17:16.346879 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:26.747145 838136 logs.go:138] Found kubelet problem: Mar 16 18:17:23 old-k8s-version-985498 kubelet[888]: E0316 18:17:23.347609 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0316 18:17:26.747156 838136 logs.go:123] Gathering logs for dmesg ...
I0316 18:17:26.747172 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0316 18:17:26.766207 838136 logs.go:123] Gathering logs for kube-scheduler [34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c] ...
I0316 18:17:26.766251 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c"
I0316 18:17:26.823871 838136 logs.go:123] Gathering logs for kube-proxy [d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd] ...
I0316 18:17:26.823920 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd"
I0316 18:17:26.870843 838136 logs.go:123] Gathering logs for kube-controller-manager [05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554] ...
I0316 18:17:26.870883 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554"
I0316 18:17:26.940409 838136 logs.go:123] Gathering logs for kubernetes-dashboard [aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3] ...
I0316 18:17:26.940460 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3"
I0316 18:17:26.987147 838136 logs.go:123] Gathering logs for kube-apiserver [84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438] ...
I0316 18:17:26.987189 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438"
I0316 18:17:27.062021 838136 logs.go:123] Gathering logs for containerd ...
I0316 18:17:27.062071 838136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0316 18:17:27.136063 838136 out.go:304] Setting ErrFile to fd 2...
I0316 18:17:27.136101 838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0316 18:17:27.136179 838136 out.go:239] X Problems detected in kubelet:
W0316 18:17:27.136198 838136 out.go:239] Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0316 18:17:27.136211 838136 out.go:239] Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:27.136229 838136 out.go:239] Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362954 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
W0316 18:17:27.136246 838136 out.go:239] Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: E0316 18:17:16.346879 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
W0316 18:17:27.136263 838136 out.go:239] Mar 16 18:17:23 old-k8s-version-985498 kubelet[888]: E0316 18:17:23.347609 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0316 18:17:27.136276 838136 out.go:304] Setting ErrFile to fd 2...
I0316 18:17:27.136283 838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 18:17:28.494615 841431 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0316 18:17:28.494636 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0316 18:17:28.494664 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:28.498412 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:28.498867 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:28.498902 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:28.499137 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:28.499360 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:28.499603 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:28.499803 841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
I0316 18:17:28.507069 841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42097
I0316 18:17:28.507685 841431 main.go:141] libmachine: () Calling .GetVersion
I0316 18:17:28.508358 841431 main.go:141] libmachine: Using API Version 1
I0316 18:17:28.508388 841431 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 18:17:28.508855 841431 main.go:141] libmachine: () Calling .GetMachineName
I0316 18:17:28.509080 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
I0316 18:17:28.510986 841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
I0316 18:17:28.511289 841431 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
I0316 18:17:28.511318 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0316 18:17:28.511342 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
I0316 18:17:28.515154 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:28.515818 841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
I0316 18:17:28.515843 841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
I0316 18:17:28.516129 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
I0316 18:17:28.516364 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
I0316 18:17:28.516500 841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
I0316 18:17:28.516679 841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
I0316 18:17:28.691344 841431 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0316 18:17:28.716545 841431 api_server.go:52] waiting for apiserver process to appear ...
I0316 18:17:28.716654 841431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:17:28.740425 841431 api_server.go:72] duration metric: took 314.168152ms to wait for apiserver process to appear ...
I0316 18:17:28.740454 841431 api_server.go:88] waiting for apiserver healthz status ...
I0316 18:17:28.740473 841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
I0316 18:17:28.753421 841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 200:
ok
I0316 18:17:28.755222 841431 api_server.go:141] control plane version: v1.29.0-rc.2
I0316 18:17:28.755253 841431 api_server.go:131] duration metric: took 14.791646ms to wait for apiserver health ...
I0316 18:17:28.755263 841431 system_pods.go:43] waiting for kube-system pods to appear ...
I0316 18:17:28.766459 841431 system_pods.go:59] 9 kube-system pods found
I0316 18:17:28.766499 841431 system_pods.go:61] "coredns-76f75df574-hkkkh" [efd50172-4179-4235-adcf-2cc14383680d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0316 18:17:28.766512 841431 system_pods.go:61] "coredns-76f75df574-rhrkz" [3f5fe20f-4f2b-4dad-ab54-c00261ce77fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0316 18:17:28.766520 841431 system_pods.go:61] "etcd-newest-cni-993416" [f9d9e16d-4c48-41ef-954d-84b2adc1d678] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0316 18:17:28.766526 841431 system_pods.go:61] "kube-apiserver-newest-cni-993416" [b745c8a8-8c3a-48a8-8884-8952190b871e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0316 18:17:28.766532 841431 system_pods.go:61] "kube-controller-manager-newest-cni-993416" [d0879001-bfc2-4268-a421-9257bc6155cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0316 18:17:28.766540 841431 system_pods.go:61] "kube-proxy-lbfnv" [4269401d-14f7-4752-a7df-ec3f9da042d0] Running
I0316 18:17:28.766584 841431 system_pods.go:61] "kube-scheduler-newest-cni-993416" [53741680-de3a-449b-ab2b-a520bc8c2c54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0316 18:17:28.766593 841431 system_pods.go:61] "metrics-server-57f55c9bc5-rbrmj" [3eabea78-4346-49ea-ada5-72c98a6daa7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0316 18:17:28.766598 841431 system_pods.go:61] "storage-provisioner" [0d551c52-212b-4b92-9b76-e1034e2d8d0b] Running
I0316 18:17:28.766604 841431 system_pods.go:74] duration metric: took 11.334758ms to wait for pod list to return data ...
I0316 18:17:28.766612 841431 default_sa.go:34] waiting for default service account to be created ...
I0316 18:17:28.772813 841431 default_sa.go:45] found service account: "default"
I0316 18:17:28.772841 841431 default_sa.go:55] duration metric: took 6.223203ms for default service account to be created ...
I0316 18:17:28.772853 841431 kubeadm.go:576] duration metric: took 346.603392ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0316 18:17:28.772869 841431 node_conditions.go:102] verifying NodePressure condition ...
I0316 18:17:28.782511 841431 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0316 18:17:28.782538 841431 node_conditions.go:123] node cpu capacity is 2
I0316 18:17:28.782550 841431 node_conditions.go:105] duration metric: took 9.676004ms to run NodePressure ...
I0316 18:17:28.782562 841431 start.go:240] waiting for startup goroutines ...
I0316 18:17:28.813219 841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0316 18:17:28.813256 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0316 18:17:28.858302 841431 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0316 18:17:28.861227 841431 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0316 18:17:28.886196 841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0316 18:17:28.886233 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0316 18:17:28.983213 841431 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0316 18:17:28.983243 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0316 18:17:28.989906 841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0316 18:17:28.989932 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0316 18:17:29.121908 841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0316 18:17:29.121935 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0316 18:17:29.124194 841431 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0316 18:17:29.124236 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0316 18:17:29.210699 841431 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0316 18:17:29.210731 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0316 18:17:29.258617 841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
I0316 18:17:29.258661 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0316 18:17:29.360734 841431 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0316 18:17:29.383687 841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0316 18:17:29.383712 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0316 18:17:29.461299 841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0316 18:17:29.461340 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0316 18:17:29.515787 841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0316 18:17:29.515831 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0316 18:17:29.593488 841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0316 18:17:29.593525 841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0316 18:17:29.669463 841431 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0316 18:17:30.700709 841431 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.842355565s)
I0316 18:17:30.700792 841431 main.go:141] libmachine: Making call to close driver server
I0316 18:17:30.700808 841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
I0316 18:17:30.700883 841431 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.340115776s)
I0316 18:17:30.700967 841431 main.go:141] libmachine: Making call to close driver server
I0316 18:17:30.700994 841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
I0316 18:17:30.701312 841431 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:17:30.701331 841431 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:17:30.701351 841431 main.go:141] libmachine: Making call to close driver server
I0316 18:17:30.701363 841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
I0316 18:17:30.701516 841431 main.go:141] libmachine: (newest-cni-993416) DBG | Closing plugin on server side
I0316 18:17:30.701561 841431 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:17:30.701594 841431 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:17:30.701607 841431 main.go:141] libmachine: Making call to close driver server
I0316 18:17:30.700801 841431 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.839529172s)
I0316 18:17:30.701662 841431 main.go:141] libmachine: Making call to close driver server
I0316 18:17:30.701676 841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
I0316 18:17:30.701622 841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
I0316 18:17:30.701822 841431 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:17:30.701844 841431 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:17:30.702168 841431 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:17:30.702181 841431 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:17:30.702190 841431 main.go:141] libmachine: Making call to close driver server
I0316 18:17:30.702197 841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
I0316 18:17:30.702313 841431 main.go:141] libmachine: (newest-cni-993416) DBG | Closing plugin on server side
I0316 18:17:30.702386 841431 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:17:30.702691 841431 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:17:30.702704 841431 addons.go:470] Verifying addon metrics-server=true in "newest-cni-993416"
I0316 18:17:30.702483 841431 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:17:30.702787 841431 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:17:30.702587 841431 main.go:141] libmachine: (newest-cni-993416) DBG | Closing plugin on server side
I0316 18:17:30.711143 841431 main.go:141] libmachine: Making call to close driver server
I0316 18:17:30.711187 841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
I0316 18:17:30.711543 841431 main.go:141] libmachine: (newest-cni-993416) DBG | Closing plugin on server side
I0316 18:17:30.711601 841431 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:17:30.711626 841431 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:17:31.260529 841431 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.590970849s)
I0316 18:17:31.260604 841431 main.go:141] libmachine: Making call to close driver server
I0316 18:17:31.260620 841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
I0316 18:17:31.261040 841431 main.go:141] libmachine: (newest-cni-993416) DBG | Closing plugin on server side
I0316 18:17:31.261069 841431 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:17:31.261120 841431 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:17:31.261129 841431 main.go:141] libmachine: Making call to close driver server
I0316 18:17:31.261137 841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
I0316 18:17:31.261437 841431 main.go:141] libmachine: Successfully made call to close driver server
I0316 18:17:31.261459 841431 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 18:17:31.263509 841431 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p newest-cni-993416 addons enable metrics-server
I0316 18:17:31.265109 841431 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
I0316 18:17:31.266627 841431 addons.go:505] duration metric: took 2.840342384s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
I0316 18:17:31.266687 841431 start.go:245] waiting for cluster config update ...
I0316 18:17:31.266702 841431 start.go:254] writing updated cluster config ...
I0316 18:17:31.266974 841431 ssh_runner.go:195] Run: rm -f paused
I0316 18:17:31.321868 841431 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
I0316 18:17:31.323761 841431 out.go:177] * Done! kubectl is now configured to use "newest-cni-993416" cluster and "default" namespace by default
I0316 18:17:37.137763 838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0316 18:17:37.159011 838136 api_server.go:72] duration metric: took 6m0.980190849s to wait for apiserver process to appear ...
I0316 18:17:37.159048 838136 api_server.go:88] waiting for apiserver healthz status ...
I0316 18:17:37.161262 838136 out.go:177]
W0316 18:17:37.162843 838136 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
W0316 18:17:37.162874 838136 out.go:239] *
W0316 18:17:37.163764 838136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0316 18:17:37.165696 838136 out.go:177]
==> container status <==
CONTAINER        IMAGE            CREATED              STATE     NAME                        ATTEMPT   POD ID          POD
4575a17a262fe    523cad1a4df73    About a minute ago   Exited    dashboard-metrics-scraper   5         fa71cda057018   dashboard-metrics-scraper-8d5bb5db8-sztdk
aba262227c6f6    07655ddf2eebe    4 minutes ago        Running   kubernetes-dashboard        0         33855e9a8d54b   kubernetes-dashboard-cd95d586-656nk
89c765def3f3a    56cc512116c8f    5 minutes ago        Running   busybox                     0         7057fc81b7e07   busybox
05061990c3ccf    b9fa1895dcaa6    5 minutes ago        Running   kube-controller-manager     1         5beca916d73cc   kube-controller-manager-old-k8s-version-985498
aa120a5aa0d88    6e38f40d628db    5 minutes ago        Running   storage-provisioner         1         0879e17dc3891   storage-provisioner
d73b58bba3532    10cc881966cfd    5 minutes ago        Running   kube-proxy                  0         57eefc4089687   kube-proxy-nvd4k
61efb30968d2b    bfe3a36ebd252    5 minutes ago        Running   coredns                     0         bfd9c69418b66   coredns-74ff55c5b-p8874
7ed441150c733    6e38f40d628db    6 minutes ago        Exited    storage-provisioner         0         0879e17dc3891   storage-provisioner
84cebb4cfc43d    ca9843d3b5454    6 minutes ago        Running   kube-apiserver              0         a118956a32a95   kube-apiserver-old-k8s-version-985498
2434210f6c63b    0369cf4303ffd    6 minutes ago        Running   etcd                        0         5c82c8921bb2b   etcd-old-k8s-version-985498
162132fbe06fe    b9fa1895dcaa6    6 minutes ago        Exited    kube-controller-manager     0         5beca916d73cc   kube-controller-manager-old-k8s-version-985498
34b075a6e3dfe    3138b6e3d4712    6 minutes ago        Running   kube-scheduler              0         30ac6cb133c85   kube-scheduler-old-k8s-version-985498
==> containerd <==
Mar 16 18:14:21 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:21.357676112Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Mar 16 18:14:21 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:21.359968980Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Mar 16 18:14:21 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:21.360097886Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.350109352Z" level=info msg="CreateContainer within sandbox \"fa71cda057018df49c80e723cf5af396685445ce28a9407dff4fef15f719ecb4\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.381226527Z" level=info msg="CreateContainer within sandbox \"fa71cda057018df49c80e723cf5af396685445ce28a9407dff4fef15f719ecb4\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc\""
Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.382660562Z" level=info msg="StartContainer for \"00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc\""
Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.508182987Z" level=info msg="StartContainer for \"00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc\" returns successfully"
Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.569953767Z" level=info msg="shim disconnected" id=00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc namespace=k8s.io
Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.570131684Z" level=warning msg="cleaning up after shim disconnected" id=00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc namespace=k8s.io
Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.570263917Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 16 18:14:29 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:29.182786591Z" level=info msg="RemoveContainer for \"962b79b13ecab4697776bb614c5c4f1d9268a209277dfcfa5e541e5bf59f9c0f\""
Mar 16 18:14:29 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:29.192113313Z" level=info msg="RemoveContainer for \"962b79b13ecab4697776bb614c5c4f1d9268a209277dfcfa5e541e5bf59f9c0f\" returns successfully"
Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.351505859Z" level=info msg="CreateContainer within sandbox \"fa71cda057018df49c80e723cf5af396685445ce28a9407dff4fef15f719ecb4\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.385557581Z" level=info msg="CreateContainer within sandbox \"fa71cda057018df49c80e723cf5af396685445ce28a9407dff4fef15f719ecb4\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439\""
Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.387778244Z" level=info msg="StartContainer for \"4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439\""
Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.555444587Z" level=info msg="StartContainer for \"4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439\" returns successfully"
Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.641017202Z" level=info msg="shim disconnected" id=4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439 namespace=k8s.io
Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.641124050Z" level=warning msg="cleaning up after shim disconnected" id=4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439 namespace=k8s.io
Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.641143875Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 16 18:15:58 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:58.529322943Z" level=info msg="RemoveContainer for \"00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc\""
Mar 16 18:15:58 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:58.535806748Z" level=info msg="RemoveContainer for \"00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc\" returns successfully"
Mar 16 18:17:08 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:17:08.348926444Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 16 18:17:08 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:17:08.358550650Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Mar 16 18:17:08 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:17:08.361090149Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Mar 16 18:17:08 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:17:08.361281019Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77] <==
I0316 18:12:18.847750 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-16 18:11:48.845664254 +0000 UTC m=+0.081462614) (total time: 30.001433121s):
Trace[2019727887]: [30.001433121s] [30.001433121s] END
E0316 18:12:18.847898 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0316 18:12:18.847999 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-16 18:11:48.846290043 +0000 UTC m=+0.082088394) (total time: 30.001041211s):
Trace[1427131847]: [30.001041211s] [30.001041211s] END
E0316 18:12:18.848068 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0316 18:12:18.848103 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-16 18:11:48.845209621 +0000 UTC m=+0.081007992) (total time: 30.002393876s):
Trace[939984059]: [30.002393876s] [30.002393876s] END
E0316 18:12:18.848200 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
.:53
[INFO] plugin/reload: Running configuration MD5 = 0c3216a78d32f257fd8c644ead867e29
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] 127.0.0.1:35912 - 52941 "HINFO IN 7891349533246800731.8101106274944321035. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02827184s
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
==> describe nodes <==
Name: old-k8s-version-985498
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=old-k8s-version-985498
kubernetes.io/os=linux
minikube.k8s.io/commit=dcb7bcec19ba52ac09364e1139fb2071215a1bc6
minikube.k8s.io/name=old-k8s-version-985498
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_03_16T18_07_16_0700
minikube.k8s.io/version=v1.32.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 16 Mar 2024 18:07:11 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-985498
AcquireTime: <unset>
RenewTime: Sat, 16 Mar 2024 18:17:34 +0000
Conditions:
Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------   -----------------                 ------------------                ------                       -------
MemoryPressure   False    Sat, 16 Mar 2024 18:13:03 +0000   Sat, 16 Mar 2024 18:07:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False    Sat, 16 Mar 2024 18:13:03 +0000   Sat, 16 Mar 2024 18:07:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False    Sat, 16 Mar 2024 18:13:03 +0000   Sat, 16 Mar 2024 18:07:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True     Sat, 16 Mar 2024 18:13:03 +0000   Sat, 16 Mar 2024 18:11:37 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.61.233
Hostname: old-k8s-version-985498
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: 4104b957f1564a25a1b06b701038e2d3
System UUID: 4104b957-f156-4a25-a1b0-6b701038e2d3
Boot ID: f0635e75-e914-462e-b0f6-4dfb2f2adbc1
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.14
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace              Name                                              CPU Requests   CPU Limits   Memory Requests   Memory Limits   AGE
---------              ----                                              ------------   ----------   ---------------   -------------   ---
default                busybox                                           0 (0%)         0 (0%)       0 (0%)            0 (0%)          9m7s
kube-system            coredns-74ff55c5b-p8874                           100m (5%)      0 (0%)       70Mi (3%)         170Mi (8%)      10m
kube-system            etcd-old-k8s-version-985498                       100m (5%)      0 (0%)       100Mi (4%)        0 (0%)          10m
kube-system            kube-apiserver-old-k8s-version-985498             250m (12%)     0 (0%)       0 (0%)            0 (0%)          10m
kube-system            kube-controller-manager-old-k8s-version-985498    200m (10%)     0 (0%)       0 (0%)            0 (0%)          10m
kube-system            kube-proxy-nvd4k                                  0 (0%)         0 (0%)       0 (0%)            0 (0%)          10m
kube-system            kube-scheduler-old-k8s-version-985498             100m (5%)      0 (0%)       0 (0%)            0 (0%)          10m
kube-system            metrics-server-9975d5f86-xqhk9                    100m (5%)      0 (0%)       200Mi (9%)        0 (0%)          8m56s
kube-system            storage-provisioner                               0 (0%)         0 (0%)       0 (0%)            0 (0%)          10m
kubernetes-dashboard   dashboard-metrics-scraper-8d5bb5db8-sztdk         0 (0%)         0 (0%)       0 (0%)            0 (0%)          5m8s
kubernetes-dashboard   kubernetes-dashboard-cd95d586-656nk               0 (0%)         0 (0%)       0 (0%)            0 (0%)          5m8s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource            Requests      Limits
--------            --------      ------
cpu                 850m (42%)    0 (0%)
memory              370Mi (17%)   170Mi (8%)
ephemeral-storage   100Mi (0%)    0 (0%)
hugepages-2Mi       0 (0%)        0 (0%)
Events:
Type     Reason                     Age                     From         Message
----     ------                     ----                    ----         -------
Normal   Starting                   10m                     kubelet      Starting kubelet.
Normal   NodeHasSufficientMemory    10m (x4 over 10m)       kubelet      Node old-k8s-version-985498 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure      10m (x3 over 10m)       kubelet      Node old-k8s-version-985498 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID       10m (x4 over 10m)       kubelet      Node old-k8s-version-985498 status is now: NodeHasSufficientPID
Normal   NodeAllocatableEnforced    10m                     kubelet      Updated Node Allocatable limit across pods
Normal   Starting                   10m                     kubelet      Starting kubelet.
Normal   NodeHasSufficientMemory    10m                     kubelet      Node old-k8s-version-985498 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure      10m                     kubelet      Node old-k8s-version-985498 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID       10m                     kubelet      Node old-k8s-version-985498 status is now: NodeHasSufficientPID
Normal   NodeAllocatableEnforced    10m                     kubelet      Updated Node Allocatable limit across pods
Normal   NodeReady                  10m                     kubelet      Node old-k8s-version-985498 status is now: NodeReady
Normal   Starting                   10m                     kube-proxy   Starting kube-proxy.
Normal   Starting                   6m39s                   kubelet      Starting kubelet.
Normal   NodeHasSufficientMemory    6m39s (x9 over 6m39s)   kubelet      Node old-k8s-version-985498 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure      6m39s (x7 over 6m39s)   kubelet      Node old-k8s-version-985498 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID       6m39s (x7 over 6m39s)   kubelet      Node old-k8s-version-985498 status is now: NodeHasSufficientPID
Normal   NodeAllocatableEnforced    6m39s                   kubelet      Updated Node Allocatable limit across pods
Normal   Starting                   5m49s                   kube-proxy   Starting kube-proxy.
==> dmesg <==
[ +4.774322] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.740197] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +1.787539] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.588929] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
[ +0.075287] kauditd_printk_skb: 1 callbacks suppressed
[ +0.075824] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
[ +0.201911] systemd-fstab-generator[514]: Ignoring "noauto" option for root device
[ +0.131486] systemd-fstab-generator[526]: Ignoring "noauto" option for root device
[ +0.379126] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
[ +6.884463] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
[ +0.070466] kauditd_printk_skb: 158 callbacks suppressed
[ +2.694758] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
[ +2.245908] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
[ +0.061702] kauditd_printk_skb: 46 callbacks suppressed
[ +5.534369] kauditd_printk_skb: 18 callbacks suppressed
[Mar16 18:11] kauditd_printk_skb: 26 callbacks suppressed
[ +21.757416] kauditd_printk_skb: 6 callbacks suppressed
[ +2.035387] systemd-fstab-generator[1480]: Ignoring "noauto" option for root device
[ +12.185563] kauditd_printk_skb: 32 callbacks suppressed
[Mar16 18:12] kauditd_printk_skb: 31 callbacks suppressed
[ +12.054497] kauditd_printk_skb: 6 callbacks suppressed
[ +20.479846] kauditd_printk_skb: 14 callbacks suppressed
[ +5.494496] kauditd_printk_skb: 4 callbacks suppressed
==> etcd [2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9] <==
2024-03-16 18:13:33.611075 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:13:43.611098 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:13:53.611353 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:14:03.612320 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:14:13.611743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:14:23.611562 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:14:33.610786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:14:43.611174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:14:53.611080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:15:03.610994 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:15:13.611341 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:15:23.611142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:15:33.611534 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:15:43.611071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:15:53.611086 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:16:03.612267 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:16:13.611102 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:16:23.611592 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:16:33.611109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:16:43.611113 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:16:53.610985 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:17:03.611530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:17:13.610943 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:17:23.610883 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-03-16 18:17:33.611175 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
18:17:38 up 7 min, 0 users, load average: 0.05, 0.27, 0.17
Linux old-k8s-version-985498 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438] <==
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0316 18:14:34.724231 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0316 18:14:41.642110 1 client.go:360] parsed scheme: "passthrough"
I0316 18:14:41.642485 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0316 18:14:41.642587 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0316 18:15:21.536978 1 client.go:360] parsed scheme: "passthrough"
I0316 18:15:21.537082 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0316 18:15:21.537094 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0316 18:16:06.282030 1 client.go:360] parsed scheme: "passthrough"
I0316 18:16:06.282110 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0316 18:16:06.282124 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0316 18:16:33.937530 1 handler_proxy.go:102] no RequestInfo found in the context
E0316 18:16:33.937824 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0316 18:16:33.937858 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0316 18:16:46.794221 1 client.go:360] parsed scheme: "passthrough"
I0316 18:16:46.794433 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0316 18:16:46.794968 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0316 18:17:27.584956 1 client.go:360] parsed scheme: "passthrough"
I0316 18:17:27.585326 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0316 18:17:27.585516 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0316 18:17:33.938437 1 handler_proxy.go:102] no RequestInfo found in the context
E0316 18:17:33.938563 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0316 18:17:33.938579 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-controller-manager [05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554] <==
E0316 18:13:33.527464 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0316 18:13:37.901958 1 request.go:655] Throttling request took 1.048886779s, request: GET:https://192.168.61.233:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W0316 18:13:38.752975 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0316 18:14:04.030343 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0316 18:14:10.404169 1 request.go:655] Throttling request took 1.046896196s, request: GET:https://192.168.61.233:8443/apis/policy/v1beta1?timeout=32s
W0316 18:14:11.256109 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0316 18:14:34.533459 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0316 18:14:42.907200 1 request.go:655] Throttling request took 1.047556849s, request: GET:https://192.168.61.233:8443/apis/extensions/v1beta1?timeout=32s
W0316 18:14:43.761205 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0316 18:15:05.036216 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0316 18:15:15.412308 1 request.go:655] Throttling request took 1.047716137s, request: GET:https://192.168.61.233:8443/apis/extensions/v1beta1?timeout=32s
W0316 18:15:16.264074 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0316 18:15:35.539966 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0316 18:15:47.914855 1 request.go:655] Throttling request took 1.048066792s, request: GET:https://192.168.61.233:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W0316 18:15:48.766840 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0316 18:16:06.042841 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0316 18:16:20.417848 1 request.go:655] Throttling request took 1.048473373s, request: GET:https://192.168.61.233:8443/apis/events.k8s.io/v1beta1?timeout=32s
W0316 18:16:21.270089 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0316 18:16:36.545342 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0316 18:16:52.920918 1 request.go:655] Throttling request took 1.048141635s, request: GET:https://192.168.61.233:8443/apis/node.k8s.io/v1?timeout=32s
W0316 18:16:53.772438 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0316 18:17:07.048102 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0316 18:17:25.423094 1 request.go:655] Throttling request took 1.04771719s, request: GET:https://192.168.61.233:8443/apis/extensions/v1beta1?timeout=32s
W0316 18:17:26.275068 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0316 18:17:37.551507 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
==> kube-controller-manager [162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72] <==
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:171 +0x28b
goroutine 145 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0010b6020, 0xc0010a20d0, 0xc00009cf60, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539 +0x11d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc0010a20d0, 0xc00009c0c0, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492 +0xc5
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc0010a20d0, 0xc00009c0c0, 0x0, 0x4764ec8)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:174 +0x2f9
goroutine 146 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc00009c0c0, 0xc0010a20f0, 0x4e0fa60, 0xc0001261c0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0xbd
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c
goroutine 147 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc00009d0e0, 0xdf8475800, 0x0, 0xc00009d020)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588 +0x17b
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571 +0x8c
==> kube-proxy [d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd] <==
I0316 18:07:32.515621 1 node.go:172] Successfully retrieved node IP: 192.168.61.233
I0316 18:07:32.515775 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.61.233), assume IPv4 operation
W0316 18:07:32.609914 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0316 18:07:32.610139 1 server_others.go:185] Using iptables Proxier.
I0316 18:07:32.612161 1 server.go:650] Version: v1.20.0
I0316 18:07:32.621168 1 config.go:315] Starting service config controller
I0316 18:07:32.621254 1 shared_informer.go:240] Waiting for caches to sync for service config
I0316 18:07:32.621463 1 config.go:224] Starting endpoint slice config controller
I0316 18:07:32.621477 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0316 18:07:32.721776 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0316 18:07:32.722054 1 shared_informer.go:247] Caches are synced for service config
I0316 18:11:49.870050 1 node.go:172] Successfully retrieved node IP: 192.168.61.233
I0316 18:11:49.870121 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.61.233), assume IPv4 operation
W0316 18:11:49.900204 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0316 18:11:49.900354 1 server_others.go:185] Using iptables Proxier.
I0316 18:11:49.902013 1 server.go:650] Version: v1.20.0
I0316 18:11:49.905480 1 config.go:315] Starting service config controller
I0316 18:11:49.905533 1 shared_informer.go:240] Waiting for caches to sync for service config
I0316 18:11:49.905564 1 config.go:224] Starting endpoint slice config controller
I0316 18:11:49.905568 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0316 18:11:50.005949 1 shared_informer.go:247] Caches are synced for service config
I0316 18:11:50.006518 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c] <==
E0316 18:11:08.498529 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.61.233:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:08.666954 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.233:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:08.782890 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.233:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:09.459992 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.61.233:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:10.277532 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.61.233:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:10.304574 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.233:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:10.324342 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.233:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:10.612240 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.233:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:10.947874 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.233:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:15.289892 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.61.233:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:17.854143 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.233:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:18.226347 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.233:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:18.279020 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.61.233:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:18.881823 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.61.233:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:19.281888 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.61.233:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:19.570252 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.233:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:19.649468 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.61.233:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:19.687041 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.233:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:19.863870 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.61.233:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:20.039168 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.233:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:20.124690 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.233:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
E0316 18:11:32.868473 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0316 18:11:32.872744 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0316 18:11:32.872894 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0316 18:12:09.082486 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
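Note: every scheduler error above is one failure seen through different reflectors: client-go keeps retrying List calls while the apiserver at 192.168.61.233:8443 refuses TCP connections. The "forbidden" errors at 18:11:32 suggest the apiserver came back up before RBAC authorization had settled, and the "Caches are synced" line at 18:12:09 marks recovery. A quick Go probe of the same endpoint (address taken from the log) distinguishes a refused connection from a DNS or timeout failure:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.61.233:8443", 3*time.Second)
	if err != nil {
		// Prints e.g. "... connect: connection refused" while the apiserver is down.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}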
==> kubelet <==
Mar 16 18:16:03 old-k8s-version-985498 kubelet[888]: E0316 18:16:03.347044 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 16 18:16:06 old-k8s-version-985498 kubelet[888]: I0316 18:16:06.688609 888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
Mar 16 18:16:06 old-k8s-version-985498 kubelet[888]: E0316 18:16:06.689281 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
Mar 16 18:16:17 old-k8s-version-985498 kubelet[888]: E0316 18:16:17.347194 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 16 18:16:19 old-k8s-version-985498 kubelet[888]: I0316 18:16:19.346181 888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
Mar 16 18:16:19 old-k8s-version-985498 kubelet[888]: E0316 18:16:19.346699 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
Mar 16 18:16:30 old-k8s-version-985498 kubelet[888]: E0316 18:16:30.348163 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 16 18:16:34 old-k8s-version-985498 kubelet[888]: I0316 18:16:34.345859 888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
Mar 16 18:16:34 old-k8s-version-985498 kubelet[888]: E0316 18:16:34.346242 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
Mar 16 18:16:41 old-k8s-version-985498 kubelet[888]: E0316 18:16:41.347306 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 16 18:16:49 old-k8s-version-985498 kubelet[888]: I0316 18:16:49.346632 888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
Mar 16 18:16:49 old-k8s-version-985498 kubelet[888]: E0316 18:16:49.347088 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: I0316 18:17:01.345945 888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.361682 888 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362146 888 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362691 888 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-9vszw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362954 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: I0316 18:17:16.346245 888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: E0316 18:17:16.346879 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
Mar 16 18:17:23 old-k8s-version-985498 kubelet[888]: E0316 18:17:23.347609 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 16 18:17:30 old-k8s-version-985498 kubelet[888]: I0316 18:17:30.346010 888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
Mar 16 18:17:30 old-k8s-version-985498 kubelet[888]: E0316 18:17:30.346307 888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
Mar 16 18:17:35 old-k8s-version-985498 kubelet[888]: E0316 18:17:35.347054 888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
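Note: two failure loops alternate in the kubelet log. metrics-server can never start because the test deliberately points it at a registry that does not resolve (fake.domain), so every pull attempt ends in ErrImagePull and then ImagePullBackOff; dashboard-metrics-scraper keeps crashing, so the kubelet restarts it with an exponential backoff (10s base, doubling, capped at 5m), which is where the repeated "back-off 2m40s" comes from. The same delay sequence, sketched with apimachinery's wait package (parameters are my reconstruction of the kubelet defaults, not its actual code):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	b := wait.Backoff{
		Duration: 10 * time.Second, // first restart delay
		Factor:   2.0,              // double after each failure
		Steps:    6,
		Cap:      5 * time.Minute, // the kubelet's maximum container backoff
	}
	for i := 0; i < 6; i++ {
		// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s.
		fmt.Println("next restart in", b.Step())
	}
}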
==> kubernetes-dashboard [aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3] <==
2024/03/16 18:12:42 Starting overwatch
2024/03/16 18:12:42 Using namespace: kubernetes-dashboard
2024/03/16 18:12:42 Using in-cluster config to connect to apiserver
2024/03/16 18:12:42 Using secret token for csrf signing
2024/03/16 18:12:42 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/03/16 18:12:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/03/16 18:12:42 Successful initial request to the apiserver, version: v1.20.0
2024/03/16 18:12:42 Generating JWE encryption key
2024/03/16 18:12:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/03/16 18:12:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/03/16 18:12:42 Initializing JWE encryption key from synchronized object
2024/03/16 18:12:42 Creating in-cluster Sidecar client
2024/03/16 18:12:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/16 18:12:42 Serving insecurely on HTTP port: 9090
2024/03/16 18:13:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/16 18:13:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/16 18:14:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/16 18:14:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/16 18:15:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/16 18:15:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/16 18:16:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/16 18:16:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/03/16 18:17:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
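Note: the dashboard itself is healthy (it serves on port 9090 and reached the apiserver), but its Sidecar metrics client never comes up because dashboard-metrics-scraper is the pod stuck in CrashLoopBackOff in the kubelet log above, so the health check fails and retries every 30 seconds indefinitely. A retry loop of the same shape, sketched in Go (the probe URL is an assumed in-cluster address for the scraper Service, not taken from the log):

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// checkHealth probes the metrics-scraper service the dashboard depends on.
func checkHealth() error {
	resp, err := http.Get("http://dashboard-metrics-scraper.kubernetes-dashboard.svc:8000/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}

func main() {
	for {
		if err := checkHealth(); err != nil {
			log.Printf("Metric client health check failed: %v. Retrying in 30 seconds.", err)
			time.Sleep(30 * time.Second)
			continue
		}
		break
	}
	log.Println("metric client healthy")
}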
==> storage-provisioner [7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6] <==
I0316 18:07:33.757039 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0316 18:07:33.776555 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0316 18:07:33.777196 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0316 18:07:33.790129 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0316 18:07:33.790801 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e782b13f-3eff-4a6c-92ef-a1c6f20af052", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-985498_88ac7e1c-8aa3-4db6-952b-13e965f374da became leader
I0316 18:07:33.791280 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-985498_88ac7e1c-8aa3-4db6-952b-13e965f374da!
I0316 18:07:33.895908 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-985498_88ac7e1c-8aa3-4db6-952b-13e965f374da!
I0316 18:11:34.660606 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0316 18:12:04.672548 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8] <==
I0316 18:12:05.757843 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0316 18:12:05.778709 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0316 18:12:05.779146 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0316 18:12:23.216321 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0316 18:12:23.217470 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e782b13f-3eff-4a6c-92ef-a1c6f20af052", APIVersion:"v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-985498_471419ea-4639-4cae-8958-0884def8dfa9 became leader
I0316 18:12:23.220625 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-985498_471419ea-4639-4cae-8958-0884def8dfa9!
I0316 18:12:23.322608 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-985498_471419ea-4639-4cae-8958-0884def8dfa9!
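Note: both storage-provisioner containers run the same startup path: initialize, contend for the kube-system/k8s.io-minikube-hostpath lock via client-go leader election, then start the provisioner controller once elected. The first instance dies fatally at 18:12:04 when it cannot reach the apiserver service (10.96.0.1:443 i/o timeout); its replacement acquires the lease at 18:12:23. A minimal election sketch in Go (using the newer Lease lock rather than the Endpoints lock shown in the events; identity and timings are illustrative):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // lease holder identity

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
				// the controller loop would run here
			},
			OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
		},
	})
}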
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-985498 -n old-k8s-version-985498
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-985498 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-xqhk9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-985498 describe pod metrics-server-9975d5f86-xqhk9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-985498 describe pod metrics-server-9975d5f86-xqhk9: exit status 1 (71.933833ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-xqhk9" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-985498 describe pod metrics-server-9975d5f86-xqhk9: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (445.56s)
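Note: the post-mortem helpers first list non-running pods with a kubectl field selector (finding metrics-server-9975d5f86-xqhk9), but that pod no longer exists by the time the follow-up describe runs, hence the NotFound error and exit status 1. The same non-running-pods query via client-go, sketched in Go (assumes the default kubeconfig path):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace, p.Name, p.Status.Phase)
	}
}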