=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-041199 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E0224 13:21:04.828068 718730 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/functional-399405/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-041199 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m18.127766032s)
-- stdout --
* [old-k8s-version-041199] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20451
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20451-713351/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-713351/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-041199" primary control-plane node in "old-k8s-version-041199" cluster
* Pulling base image v0.0.46-1740046583-20436 ...
* Restarting existing docker container for "old-k8s-version-041199" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
* Verifying Kubernetes components...
- Using image registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-041199 addons enable metrics-server
* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
-- /stdout --
** stderr **
I0224 13:20:06.920965 927252 out.go:345] Setting OutFile to fd 1 ...
I0224 13:20:06.921219 927252 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 13:20:06.921247 927252 out.go:358] Setting ErrFile to fd 2...
I0224 13:20:06.921267 927252 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 13:20:06.921550 927252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-713351/.minikube/bin
I0224 13:20:06.921973 927252 out.go:352] Setting JSON to false
I0224 13:20:06.923085 927252 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14555,"bootTime":1740388652,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0224 13:20:06.923191 927252 start.go:139] virtualization:
I0224 13:20:06.928840 927252 out.go:177] * [old-k8s-version-041199] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0224 13:20:06.933761 927252 out.go:177] - MINIKUBE_LOCATION=20451
I0224 13:20:06.935473 927252 notify.go:220] Checking for updates...
I0224 13:20:06.940415 927252 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0224 13:20:06.943574 927252 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20451-713351/kubeconfig
I0224 13:20:06.946800 927252 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-713351/.minikube
I0224 13:20:06.950681 927252 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0224 13:20:06.953743 927252 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0224 13:20:06.957403 927252 config.go:182] Loaded profile config "old-k8s-version-041199": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0224 13:20:06.961224 927252 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0224 13:20:06.964258 927252 driver.go:394] Setting default libvirt URI to qemu:///system
I0224 13:20:07.011476 927252 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
I0224 13:20:07.011604 927252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0224 13:20:07.105903 927252 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:68 SystemTime:2025-02-24 13:20:07.096102664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
I0224 13:20:07.106012 927252 docker.go:318] overlay module found
I0224 13:20:07.113319 927252 out.go:177] * Using the docker driver based on existing profile
I0224 13:20:07.116717 927252 start.go:297] selected driver: docker
I0224 13:20:07.116741 927252 start.go:901] validating driver "docker" against &{Name:old-k8s-version-041199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-041199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0224 13:20:07.116855 927252 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0224 13:20:07.117548 927252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0224 13:20:07.204077 927252 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:68 SystemTime:2025-02-24 13:20:07.194936819 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
I0224 13:20:07.204512 927252 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0224 13:20:07.204532 927252 cni.go:84] Creating CNI manager for ""
I0224 13:20:07.204571 927252 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0224 13:20:07.204614 927252 start.go:340] cluster config:
{Name:old-k8s-version-041199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-041199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0224 13:20:07.209973 927252 out.go:177] * Starting "old-k8s-version-041199" primary control-plane node in "old-k8s-version-041199" cluster
I0224 13:20:07.213509 927252 cache.go:121] Beginning downloading kic base image for docker with containerd
I0224 13:20:07.217229 927252 out.go:177] * Pulling base image v0.0.46-1740046583-20436 ...
I0224 13:20:07.221167 927252 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0224 13:20:07.221224 927252 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-713351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0224 13:20:07.221237 927252 cache.go:56] Caching tarball of preloaded images
I0224 13:20:07.221327 927252 preload.go:172] Found /home/jenkins/minikube-integration/20451-713351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0224 13:20:07.221342 927252 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0224 13:20:07.221465 927252 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/old-k8s-version-041199/config.json ...
I0224 13:20:07.221779 927252 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local docker daemon
I0224 13:20:07.242873 927252 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local docker daemon, skipping pull
I0224 13:20:07.242895 927252 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 exists in daemon, skipping load
I0224 13:20:07.242909 927252 cache.go:230] Successfully downloaded all kic artifacts
I0224 13:20:07.242942 927252 start.go:360] acquireMachinesLock for old-k8s-version-041199: {Name:mk6244cd6a8a1d2ec4fdc814875a90843ce0e46c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:20:07.242993 927252 start.go:364] duration metric: took 33.189µs to acquireMachinesLock for "old-k8s-version-041199"
I0224 13:20:07.243012 927252 start.go:96] Skipping create...Using existing machine configuration
I0224 13:20:07.243017 927252 fix.go:54] fixHost starting:
I0224 13:20:07.243278 927252 cli_runner.go:164] Run: docker container inspect old-k8s-version-041199 --format={{.State.Status}}
I0224 13:20:07.261684 927252 fix.go:112] recreateIfNeeded on old-k8s-version-041199: state=Stopped err=<nil>
W0224 13:20:07.261719 927252 fix.go:138] unexpected machine state, will restart: <nil>
I0224 13:20:07.264975 927252 out.go:177] * Restarting existing docker container for "old-k8s-version-041199" ...
I0224 13:20:07.267988 927252 cli_runner.go:164] Run: docker start old-k8s-version-041199
I0224 13:20:07.598509 927252 cli_runner.go:164] Run: docker container inspect old-k8s-version-041199 --format={{.State.Status}}
I0224 13:20:07.626066 927252 kic.go:430] container "old-k8s-version-041199" state is running.
I0224 13:20:07.626544 927252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-041199
I0224 13:20:07.662152 927252 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/old-k8s-version-041199/config.json ...
I0224 13:20:07.662408 927252 machine.go:93] provisionDockerMachine start ...
I0224 13:20:07.662481 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:07.693341 927252 main.go:141] libmachine: Using SSH client type: native
I0224 13:20:07.693692 927252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil> [] 0s} 127.0.0.1 33824 <nil> <nil>}
I0224 13:20:07.693703 927252 main.go:141] libmachine: About to run SSH command:
hostname
I0224 13:20:07.695885 927252 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0224 13:20:10.854578 927252 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-041199
I0224 13:20:10.854605 927252 ubuntu.go:169] provisioning hostname "old-k8s-version-041199"
I0224 13:20:10.854682 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:10.927743 927252 main.go:141] libmachine: Using SSH client type: native
I0224 13:20:10.928169 927252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil> [] 0s} 127.0.0.1 33824 <nil> <nil>}
I0224 13:20:10.928185 927252 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-041199 && echo "old-k8s-version-041199" | sudo tee /etc/hostname
I0224 13:20:11.154808 927252 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-041199
I0224 13:20:11.154908 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:11.224447 927252 main.go:141] libmachine: Using SSH client type: native
I0224 13:20:11.224719 927252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil> [] 0s} 127.0.0.1 33824 <nil> <nil>}
I0224 13:20:11.224741 927252 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-041199' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-041199/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-041199' | sudo tee -a /etc/hosts;
fi
fi
I0224 13:20:11.394644 927252 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0224 13:20:11.394669 927252 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20451-713351/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-713351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-713351/.minikube}
I0224 13:20:11.394694 927252 ubuntu.go:177] setting up certificates
I0224 13:20:11.394704 927252 provision.go:84] configureAuth start
I0224 13:20:11.394772 927252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-041199
I0224 13:20:11.419234 927252 provision.go:143] copyHostCerts
I0224 13:20:11.419299 927252 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-713351/.minikube/ca.pem, removing ...
I0224 13:20:11.419315 927252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-713351/.minikube/ca.pem
I0224 13:20:11.419389 927252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-713351/.minikube/ca.pem (1082 bytes)
I0224 13:20:11.419538 927252 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-713351/.minikube/cert.pem, removing ...
I0224 13:20:11.419543 927252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-713351/.minikube/cert.pem
I0224 13:20:11.419574 927252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-713351/.minikube/cert.pem (1123 bytes)
I0224 13:20:11.419631 927252 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-713351/.minikube/key.pem, removing ...
I0224 13:20:11.419637 927252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-713351/.minikube/key.pem
I0224 13:20:11.419661 927252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-713351/.minikube/key.pem (1679 bytes)
I0224 13:20:11.419709 927252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-713351/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-041199 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-041199]
I0224 13:20:12.131282 927252 provision.go:177] copyRemoteCerts
I0224 13:20:12.131356 927252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0224 13:20:12.131411 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:12.153612 927252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/old-k8s-version-041199/id_rsa Username:docker}
I0224 13:20:12.251916 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0224 13:20:12.283500 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0224 13:20:12.338249 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0224 13:20:12.387549 927252 provision.go:87] duration metric: took 992.829698ms to configureAuth
I0224 13:20:12.387577 927252 ubuntu.go:193] setting minikube options for container-runtime
I0224 13:20:12.387784 927252 config.go:182] Loaded profile config "old-k8s-version-041199": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0224 13:20:12.387792 927252 machine.go:96] duration metric: took 4.725375809s to provisionDockerMachine
I0224 13:20:12.387800 927252 start.go:293] postStartSetup for "old-k8s-version-041199" (driver="docker")
I0224 13:20:12.387810 927252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0224 13:20:12.387857 927252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0224 13:20:12.387898 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:12.416339 927252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/old-k8s-version-041199/id_rsa Username:docker}
I0224 13:20:12.515367 927252 ssh_runner.go:195] Run: cat /etc/os-release
I0224 13:20:12.518940 927252 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0224 13:20:12.518994 927252 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0224 13:20:12.519010 927252 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0224 13:20:12.519021 927252 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0224 13:20:12.519032 927252 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-713351/.minikube/addons for local assets ...
I0224 13:20:12.519090 927252 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-713351/.minikube/files for local assets ...
I0224 13:20:12.519175 927252 filesync.go:149] local asset: /home/jenkins/minikube-integration/20451-713351/.minikube/files/etc/ssl/certs/7187302.pem -> 7187302.pem in /etc/ssl/certs
I0224 13:20:12.519288 927252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0224 13:20:12.527851 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/files/etc/ssl/certs/7187302.pem --> /etc/ssl/certs/7187302.pem (1708 bytes)
I0224 13:20:12.551871 927252 start.go:296] duration metric: took 164.055039ms for postStartSetup
I0224 13:20:12.551957 927252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0224 13:20:12.552002 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:12.569834 927252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/old-k8s-version-041199/id_rsa Username:docker}
I0224 13:20:12.662770 927252 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0224 13:20:12.667663 927252 fix.go:56] duration metric: took 5.424637406s for fixHost
I0224 13:20:12.667687 927252 start.go:83] releasing machines lock for "old-k8s-version-041199", held for 5.424685774s
I0224 13:20:12.667755 927252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-041199
I0224 13:20:12.687910 927252 ssh_runner.go:195] Run: cat /version.json
I0224 13:20:12.687973 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:12.688221 927252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0224 13:20:12.688271 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:12.715420 927252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/old-k8s-version-041199/id_rsa Username:docker}
I0224 13:20:12.737091 927252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/old-k8s-version-041199/id_rsa Username:docker}
I0224 13:20:12.944924 927252 ssh_runner.go:195] Run: systemctl --version
I0224 13:20:12.949409 927252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0224 13:20:12.953840 927252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0224 13:20:12.971424 927252 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0224 13:20:12.971505 927252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0224 13:20:12.980791 927252 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0224 13:20:12.980812 927252 start.go:495] detecting cgroup driver to use...
I0224 13:20:12.980843 927252 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0224 13:20:12.980914 927252 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0224 13:20:12.995433 927252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 13:20:13.008723 927252 docker.go:217] disabling cri-docker service (if available) ...
I0224 13:20:13.008784 927252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0224 13:20:13.022865 927252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0224 13:20:13.035048 927252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0224 13:20:13.141171 927252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0224 13:20:13.248619 927252 docker.go:233] disabling docker service ...
I0224 13:20:13.248685 927252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0224 13:20:13.269913 927252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0224 13:20:13.291258 927252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0224 13:20:13.408191 927252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0224 13:20:13.511061 927252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0224 13:20:13.524688 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
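That `printf | sudo tee` writes `/etc/crictl.yaml`, which points `crictl` at containerd's CRI socket so later `crictl images`/`crictl ps` calls need no `--runtime-endpoint` flag. A minimal reproduction of the config write against a scratch path (the target path is illustrative; minikube writes `/etc/crictl.yaml` via `sudo tee`):

```shell
# Sketch: generate the crictl config that selects containerd's CRI socket.
cfg=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' > "$cfg"
cat "$cfg"
```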
I0224 13:20:13.543424 927252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0224 13:20:13.553879 927252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0224 13:20:13.564372 927252 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0224 13:20:13.564514 927252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0224 13:20:13.575220 927252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 13:20:13.585778 927252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0224 13:20:13.595714 927252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 13:20:13.605740 927252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0224 13:20:13.615154 927252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0224 13:20:13.624827 927252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0224 13:20:13.634031 927252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0224 13:20:13.647738 927252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 13:20:13.746431 927252 ssh_runner.go:195] Run: sudo systemctl restart containerd
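The `sed` edits above rewrite `/etc/containerd/config.toml` in place to select the cgroupfs driver (`SystemdCgroup = false`) and the runc v2 shim before restarting containerd. A hedged sketch of the two key substitutions against a scratch copy (the TOML below is a trimmed, illustrative fragment, not minikube's full config template):

```shell
# Sketch: apply the log's key config.toml edits (cgroupfs + runc v2 shim)
# to a scratch file instead of /etc/containerd/config.toml.
toml=$(mktemp)
cat > "$toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runtime.v1.linux"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# \1 preserves the original indentation captured by ( *).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$toml"
cat "$toml"
```

On the real host these edits only take effect after the `systemctl daemon-reload` / `systemctl restart containerd` pair that follows in the log.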
I0224 13:20:13.975358 927252 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0224 13:20:13.975477 927252 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0224 13:20:13.979651 927252 start.go:563] Will wait 60s for crictl version
I0224 13:20:13.979710 927252 ssh_runner.go:195] Run: which crictl
I0224 13:20:13.984152 927252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0224 13:20:14.060670 927252 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.25
RuntimeApiVersion: v1
I0224 13:20:14.060739 927252 ssh_runner.go:195] Run: containerd --version
I0224 13:20:14.096115 927252 ssh_runner.go:195] Run: containerd --version
I0224 13:20:14.130653 927252 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
I0224 13:20:14.133624 927252 cli_runner.go:164] Run: docker network inspect old-k8s-version-041199 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0224 13:20:14.155553 927252 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0224 13:20:14.159776 927252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
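The hosts update above uses a filter-then-replace pattern: strip any existing tab-delimited `host.minikube.internal` line, append the fresh mapping, and copy the temp file back over `/etc/hosts` (plain redirection can't write a root-owned file from an unprivileged shell, hence the `sudo cp` at the end). A sketch of the same pattern against a scratch hosts file (paths are illustrative, no sudo):

```shell
# Sketch: idempotently pin a name in a hosts file, as minikube does for
# host.minikube.internal. The grep -v drops any stale mapping first.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.76.1\thost.minikube.internal\n' > "$hosts"

tmp=$(mktemp)
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.76.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
cat "$hosts"
```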
I0224 13:20:14.170483 927252 kubeadm.go:883] updating cluster {Name:old-k8s-version-041199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-041199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0224 13:20:14.170613 927252 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0224 13:20:14.170676 927252 ssh_runner.go:195] Run: sudo crictl images --output json
I0224 13:20:14.222330 927252 containerd.go:627] all images are preloaded for containerd runtime.
I0224 13:20:14.222357 927252 containerd.go:534] Images already preloaded, skipping extraction
I0224 13:20:14.222416 927252 ssh_runner.go:195] Run: sudo crictl images --output json
I0224 13:20:14.270766 927252 containerd.go:627] all images are preloaded for containerd runtime.
I0224 13:20:14.270795 927252 cache_images.go:84] Images are preloaded, skipping loading
I0224 13:20:14.270804 927252 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0224 13:20:14.270975 927252 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-041199 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-041199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
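The drop-in rendered above relies on a systemd idiom: the bare `ExecStart=` line clears the `ExecStart` inherited from `kubelet.service` before the next line sets minikube's own command; for a non-oneshot service, systemd would otherwise reject a second `ExecStart`. A sketch writing a trimmed drop-in to a scratch path (the kubelet flags are abbreviated from the log, and the path is illustrative; the real file is `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`):

```shell
# Sketch: systemd drop-in that resets ExecStart before overriding it.
# The empty "ExecStart=" is required to clear the base unit's command.
dropin=$(mktemp)
cat > "$dropin" <<'EOF'
[Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
EOF
grep -c '^ExecStart' "$dropin"   # 2: one reset line, one override
```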
I0224 13:20:14.271063 927252 ssh_runner.go:195] Run: sudo crictl info
I0224 13:20:14.320187 927252 cni.go:84] Creating CNI manager for ""
I0224 13:20:14.320208 927252 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0224 13:20:14.320217 927252 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0224 13:20:14.320237 927252 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-041199 NodeName:old-k8s-version-041199 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0224 13:20:14.320365 927252 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-041199"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0224 13:20:14.320431 927252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0224 13:20:14.330131 927252 binaries.go:44] Found k8s binaries, skipping transfer
I0224 13:20:14.330200 927252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0224 13:20:14.345192 927252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0224 13:20:14.367456 927252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0224 13:20:14.388950 927252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0224 13:20:14.408622 927252 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0224 13:20:14.412281 927252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0224 13:20:14.424198 927252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 13:20:14.531838 927252 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0224 13:20:14.554406 927252 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/old-k8s-version-041199 for IP: 192.168.76.2
I0224 13:20:14.554476 927252 certs.go:194] generating shared ca certs ...
I0224 13:20:14.554507 927252 certs.go:226] acquiring lock for ca certs: {Name:mkc72ecc1d89fe0792bd08d20ea71860b678bc29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 13:20:14.554685 927252 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-713351/.minikube/ca.key
I0224 13:20:14.554756 927252 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-713351/.minikube/proxy-client-ca.key
I0224 13:20:14.554779 927252 certs.go:256] generating profile certs ...
I0224 13:20:14.554882 927252 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/old-k8s-version-041199/client.key
I0224 13:20:14.554963 927252 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/old-k8s-version-041199/apiserver.key.8f06a9d3
I0224 13:20:14.555021 927252 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/old-k8s-version-041199/proxy-client.key
I0224 13:20:14.555164 927252 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/718730.pem (1338 bytes)
W0224 13:20:14.555221 927252 certs.go:480] ignoring /home/jenkins/minikube-integration/20451-713351/.minikube/certs/718730_empty.pem, impossibly tiny 0 bytes
I0224 13:20:14.555252 927252 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca-key.pem (1675 bytes)
I0224 13:20:14.555304 927252 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca.pem (1082 bytes)
I0224 13:20:14.555355 927252 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/cert.pem (1123 bytes)
I0224 13:20:14.555402 927252 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/key.pem (1679 bytes)
I0224 13:20:14.555472 927252 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/files/etc/ssl/certs/7187302.pem (1708 bytes)
I0224 13:20:14.556136 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0224 13:20:14.587829 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0224 13:20:14.611860 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0224 13:20:14.651272 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0224 13:20:14.678052 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/old-k8s-version-041199/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0224 13:20:14.701769 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/old-k8s-version-041199/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0224 13:20:14.725444 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/old-k8s-version-041199/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0224 13:20:14.778300 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/old-k8s-version-041199/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0224 13:20:14.843235 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/files/etc/ssl/certs/7187302.pem --> /usr/share/ca-certificates/7187302.pem (1708 bytes)
I0224 13:20:14.869457 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0224 13:20:14.893884 927252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/certs/718730.pem --> /usr/share/ca-certificates/718730.pem (1338 bytes)
I0224 13:20:14.918767 927252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0224 13:20:14.938221 927252 ssh_runner.go:195] Run: openssl version
I0224 13:20:14.944210 927252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7187302.pem && ln -fs /usr/share/ca-certificates/7187302.pem /etc/ssl/certs/7187302.pem"
I0224 13:20:14.953689 927252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7187302.pem
I0224 13:20:14.957685 927252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 24 12:41 /usr/share/ca-certificates/7187302.pem
I0224 13:20:14.957792 927252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7187302.pem
I0224 13:20:14.965025 927252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7187302.pem /etc/ssl/certs/3ec20f2e.0"
I0224 13:20:14.973917 927252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0224 13:20:14.983260 927252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0224 13:20:14.987427 927252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:34 /usr/share/ca-certificates/minikubeCA.pem
I0224 13:20:14.987545 927252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0224 13:20:14.994836 927252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0224 13:20:15.004376 927252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/718730.pem && ln -fs /usr/share/ca-certificates/718730.pem /etc/ssl/certs/718730.pem"
I0224 13:20:15.016022 927252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/718730.pem
I0224 13:20:15.021007 927252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 24 12:41 /usr/share/ca-certificates/718730.pem
I0224 13:20:15.021130 927252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/718730.pem
I0224 13:20:15.030502 927252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/718730.pem /etc/ssl/certs/51391683.0"
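The `openssl x509 -hash` / `ln -fs` sequence above follows OpenSSL's hashed-directory convention: a CA certificate in `/etc/ssl/certs` is found by a symlink named `<subject-hash>.0` pointing at it, which is why each PEM gets an opaque companion like `b5213941.0`. A sketch with a freshly generated self-signed cert in a scratch directory (all paths and names illustrative):

```shell
# Sketch: install a CA cert under OpenSSL's hashed-name lookup convention,
# as minikube does for minikubeCA.pem. Scratch dir, no sudo.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj '/CN=sketchCA' \
  -keyout "$certdir/ca.key" -out "$certdir/ca.pem" 2>/dev/null

hash=$(openssl x509 -hash -noout -in "$certdir/ca.pem")
ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
ls -l "$certdir/$hash.0"
```

`ln -fs` makes the link creation idempotent across restarts, matching the `test -L ... || ln -fs` guard in the log.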
I0224 13:20:15.040913 927252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0224 13:20:15.045714 927252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0224 13:20:15.054364 927252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0224 13:20:15.062766 927252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0224 13:20:15.071056 927252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0224 13:20:15.079236 927252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0224 13:20:15.087228 927252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
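The `-checkend 86400` probes above ask whether each control-plane cert will still be valid 24 hours (86400 seconds) from now; exit status 0 means yes, non-zero triggers regeneration. A sketch with a throwaway cert (paths and names illustrative):

```shell
# Sketch: openssl's -checkend takes a window in seconds;
# exit 0 means the cert is still valid at now+window.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj '/CN=t' \
  -keyout "$tmp/k.pem" -out "$tmp/c.pem" 2>/dev/null

if openssl x509 -noout -in "$tmp/c.pem" -checkend 86400; then
  echo "valid for at least 24h"
fi
```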
I0224 13:20:15.095316 927252 kubeadm.go:392] StartCluster: {Name:old-k8s-version-041199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-041199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0224 13:20:15.095428 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0224 13:20:15.095541 927252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0224 13:20:15.145685 927252 cri.go:89] found id: "9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5"
I0224 13:20:15.145718 927252 cri.go:89] found id: "1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7"
I0224 13:20:15.145723 927252 cri.go:89] found id: "d92ecb1f0a76cd642357b3da95e754106e193ba37668a3bcca987ff8e086048b"
I0224 13:20:15.145726 927252 cri.go:89] found id: "bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3"
I0224 13:20:15.145754 927252 cri.go:89] found id: "f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8"
I0224 13:20:15.145764 927252 cri.go:89] found id: "46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73"
I0224 13:20:15.145768 927252 cri.go:89] found id: "f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d"
I0224 13:20:15.145771 927252 cri.go:89] found id: "a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa"
I0224 13:20:15.145774 927252 cri.go:89] found id: ""
I0224 13:20:15.145849 927252 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0224 13:20:15.159108 927252 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-02-24T13:20:15Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0224 13:20:15.159229 927252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0224 13:20:15.168921 927252 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0224 13:20:15.168998 927252 kubeadm.go:593] restartPrimaryControlPlane start ...
I0224 13:20:15.169097 927252 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0224 13:20:15.178134 927252 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0224 13:20:15.178665 927252 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-041199" does not appear in /home/jenkins/minikube-integration/20451-713351/kubeconfig
I0224 13:20:15.178813 927252 kubeconfig.go:62] /home/jenkins/minikube-integration/20451-713351/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-041199" cluster setting kubeconfig missing "old-k8s-version-041199" context setting]
I0224 13:20:15.179203 927252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-713351/kubeconfig: {Name:mk2d402ee8f3936e3ec334c56d05ef6059f3cb5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 13:20:15.180833 927252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0224 13:20:15.189979 927252 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0224 13:20:15.190039 927252 kubeadm.go:597] duration metric: took 21.018691ms to restartPrimaryControlPlane
I0224 13:20:15.190055 927252 kubeadm.go:394] duration metric: took 94.753579ms to StartCluster
I0224 13:20:15.190070 927252 settings.go:142] acquiring lock: {Name:mk595fc9ff86cccbad8dd75071531f844958cc25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 13:20:15.190173 927252 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20451-713351/kubeconfig
I0224 13:20:15.190937 927252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-713351/kubeconfig: {Name:mk2d402ee8f3936e3ec334c56d05ef6059f3cb5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 13:20:15.191195 927252 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0224 13:20:15.191639 927252 config.go:182] Loaded profile config "old-k8s-version-041199": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0224 13:20:15.191602 927252 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0224 13:20:15.191683 927252 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-041199"
I0224 13:20:15.191694 927252 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-041199"
I0224 13:20:15.191698 927252 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-041199"
W0224 13:20:15.191705 927252 addons.go:247] addon storage-provisioner should already be in state true
I0224 13:20:15.191711 927252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-041199"
I0224 13:20:15.191740 927252 host.go:66] Checking if "old-k8s-version-041199" exists ...
I0224 13:20:15.192184 927252 cli_runner.go:164] Run: docker container inspect old-k8s-version-041199 --format={{.State.Status}}
I0224 13:20:15.192188 927252 cli_runner.go:164] Run: docker container inspect old-k8s-version-041199 --format={{.State.Status}}
I0224 13:20:15.192712 927252 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-041199"
I0224 13:20:15.192737 927252 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-041199"
W0224 13:20:15.192744 927252 addons.go:247] addon metrics-server should already be in state true
I0224 13:20:15.192779 927252 host.go:66] Checking if "old-k8s-version-041199" exists ...
I0224 13:20:15.193270 927252 cli_runner.go:164] Run: docker container inspect old-k8s-version-041199 --format={{.State.Status}}
I0224 13:20:15.195366 927252 addons.go:69] Setting dashboard=true in profile "old-k8s-version-041199"
I0224 13:20:15.195396 927252 addons.go:238] Setting addon dashboard=true in "old-k8s-version-041199"
W0224 13:20:15.195404 927252 addons.go:247] addon dashboard should already be in state true
I0224 13:20:15.195439 927252 host.go:66] Checking if "old-k8s-version-041199" exists ...
I0224 13:20:15.195895 927252 cli_runner.go:164] Run: docker container inspect old-k8s-version-041199 --format={{.State.Status}}
I0224 13:20:15.197521 927252 out.go:177] * Verifying Kubernetes components...
I0224 13:20:15.200806 927252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 13:20:15.267947 927252 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0224 13:20:15.268082 927252 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0224 13:20:15.272019 927252 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0224 13:20:15.272269 927252 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0224 13:20:15.272283 927252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0224 13:20:15.272351 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:15.276316 927252 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-041199"
W0224 13:20:15.276336 927252 addons.go:247] addon default-storageclass should already be in state true
I0224 13:20:15.276360 927252 host.go:66] Checking if "old-k8s-version-041199" exists ...
I0224 13:20:15.276777 927252 cli_runner.go:164] Run: docker container inspect old-k8s-version-041199 --format={{.State.Status}}
I0224 13:20:15.281404 927252 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0224 13:20:15.284299 927252 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0224 13:20:15.284334 927252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0224 13:20:15.284405 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:15.297739 927252 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0224 13:20:15.297768 927252 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0224 13:20:15.297837 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:15.323268 927252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/old-k8s-version-041199/id_rsa Username:docker}
I0224 13:20:15.337786 927252 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0224 13:20:15.337809 927252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0224 13:20:15.337883 927252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-041199
I0224 13:20:15.339264 927252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/old-k8s-version-041199/id_rsa Username:docker}
I0224 13:20:15.368453 927252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/old-k8s-version-041199/id_rsa Username:docker}
I0224 13:20:15.389813 927252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/old-k8s-version-041199/id_rsa Username:docker}
I0224 13:20:15.419516 927252 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0224 13:20:15.455307 927252 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-041199" to be "Ready" ...
I0224 13:20:15.586970 927252 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0224 13:20:15.587034 927252 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0224 13:20:15.598708 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0224 13:20:15.624999 927252 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0224 13:20:15.625071 927252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0224 13:20:15.652167 927252 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0224 13:20:15.652244 927252 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0224 13:20:15.682119 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0224 13:20:15.695386 927252 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0224 13:20:15.695461 927252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0224 13:20:15.794091 927252 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0224 13:20:15.794171 927252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0224 13:20:15.797237 927252 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0224 13:20:15.797300 927252 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
W0224 13:20:15.839309 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:15.839424 927252 retry.go:31] will retry after 263.761431ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:15.895639 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0224 13:20:15.898870 927252 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0224 13:20:15.898933 927252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
W0224 13:20:15.936641 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:15.936721 927252 retry.go:31] will retry after 231.042449ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:15.989446 927252 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0224 13:20:15.989469 927252 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0224 13:20:16.064347 927252 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0224 13:20:16.064380 927252 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
W0224 13:20:16.068973 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.069003 927252 retry.go:31] will retry after 338.089662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.088815 927252 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0224 13:20:16.088839 927252 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0224 13:20:16.104077 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0224 13:20:16.109727 927252 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0224 13:20:16.109794 927252 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0224 13:20:16.167918 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0224 13:20:16.195895 927252 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0224 13:20:16.195969 927252 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
W0224 13:20:16.245959 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.246037 927252 retry.go:31] will retry after 544.765242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.303404 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0224 13:20:16.376911 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.376996 927252 retry.go:31] will retry after 285.759286ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.407288 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0224 13:20:16.474650 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.474748 927252 retry.go:31] will retry after 356.549059ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0224 13:20:16.546358 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.546453 927252 retry.go:31] will retry after 441.9144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.663821 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0224 13:20:16.763685 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.763735 927252 retry.go:31] will retry after 670.384108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.791870 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0224 13:20:16.832446 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0224 13:20:16.928787 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.928845 927252 retry.go:31] will retry after 617.764895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0224 13:20:16.985632 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.985680 927252 retry.go:31] will retry after 415.357076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:16.988964 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0224 13:20:17.095159 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:17.095206 927252 retry.go:31] will retry after 303.567778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:17.399772 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0224 13:20:17.401482 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0224 13:20:17.435087 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0224 13:20:17.456361 927252 node_ready.go:53] error getting node "old-k8s-version-041199": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-041199": dial tcp 192.168.76.2:8443: connect: connection refused
I0224 13:20:17.547574 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0224 13:20:17.805104 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:17.805191 927252 retry.go:31] will retry after 874.315457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0224 13:20:17.805137 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:17.805236 927252 retry.go:31] will retry after 462.712931ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0224 13:20:17.886751 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:17.886795 927252 retry.go:31] will retry after 739.776897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0224 13:20:17.919466 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:17.919511 927252 retry.go:31] will retry after 1.136913001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:18.268807 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0224 13:20:18.373625 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:18.373676 927252 retry.go:31] will retry after 1.097341026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:18.627706 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0224 13:20:18.680167 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0224 13:20:18.732142 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:18.732180 927252 retry.go:31] will retry after 1.281802993s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0224 13:20:18.854015 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:18.854058 927252 retry.go:31] will retry after 1.706597946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:19.057033 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0224 13:20:19.146500 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:19.146535 927252 retry.go:31] will retry after 1.511346812s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:19.456405 927252 node_ready.go:53] error getting node "old-k8s-version-041199": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-041199": dial tcp 192.168.76.2:8443: connect: connection refused
I0224 13:20:19.471768 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0224 13:20:19.606969 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:19.607007 927252 retry.go:31] will retry after 1.397681499s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:20.015325 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0224 13:20:20.117801 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:20.117837 927252 retry.go:31] will retry after 2.362989465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:20.561126 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0224 13:20:20.658799 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0224 13:20:20.670917 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:20.670956 927252 retry.go:31] will retry after 2.162902585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0224 13:20:20.776495 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:20.776581 927252 retry.go:31] will retry after 1.740758801s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:21.005473 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0224 13:20:21.095938 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:21.095977 927252 retry.go:31] will retry after 1.800153514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:21.955965 927252 node_ready.go:53] error getting node "old-k8s-version-041199": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-041199": dial tcp 192.168.76.2:8443: connect: connection refused
I0224 13:20:22.481239 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0224 13:20:22.517773 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0224 13:20:22.593026 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:22.593057 927252 retry.go:31] will retry after 2.182601352s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0224 13:20:22.664496 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:22.664527 927252 retry.go:31] will retry after 2.736852381s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:22.834762 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0224 13:20:22.897138 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0224 13:20:22.942347 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:22.942385 927252 retry.go:31] will retry after 2.367331871s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0224 13:20:23.028371 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:23.028408 927252 retry.go:31] will retry after 3.552052989s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0224 13:20:24.456340 927252 node_ready.go:53] error getting node "old-k8s-version-041199": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-041199": dial tcp 192.168.76.2:8443: connect: connection refused
I0224 13:20:24.775837 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0224 13:20:25.310473 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0224 13:20:25.401804 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0224 13:20:26.580693 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0224 13:20:34.958750 927252 node_ready.go:53] error getting node "old-k8s-version-041199": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-041199": net/http: TLS handshake timeout
I0224 13:20:35.145286 927252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.369406522s)
W0224 13:20:35.145321 927252 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I0224 13:20:35.145340 927252 retry.go:31] will retry after 6.267075673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I0224 13:20:36.105024 927252 node_ready.go:49] node "old-k8s-version-041199" has status "Ready":"True"
I0224 13:20:36.105047 927252 node_ready.go:38] duration metric: took 20.649695138s for node "old-k8s-version-041199" to be "Ready" ...
I0224 13:20:36.105057 927252 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 13:20:36.322272 927252 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-9947z" in "kube-system" namespace to be "Ready" ...
I0224 13:20:38.339126 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:20:39.155086 927252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.753241793s)
I0224 13:20:39.155201 927252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.844694429s)
I0224 13:20:39.155247 927252 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-041199"
I0224 13:20:39.237123 927252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.65637239s)
I0224 13:20:39.240468 927252 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-041199 addons enable metrics-server
I0224 13:20:40.829040 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:20:41.412544 927252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0224 13:20:42.251134 927252 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
I0224 13:20:42.254261 927252 addons.go:514] duration metric: took 27.062659242s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
I0224 13:20:43.330833 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:20:45.827776 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:20:47.828783 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:20:49.828868 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:20:52.327966 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:20:54.828184 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:20:56.829351 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:20:59.328979 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:01.330045 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:03.831725 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:06.327221 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:08.828652 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:11.327446 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:13.327723 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:15.828343 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:18.327038 927252 pod_ready.go:103] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:20.830324 927252 pod_ready.go:93] pod "coredns-74ff55c5b-9947z" in "kube-system" namespace has status "Ready":"True"
I0224 13:21:20.830350 927252 pod_ready.go:82] duration metric: took 44.508037459s for pod "coredns-74ff55c5b-9947z" in "kube-system" namespace to be "Ready" ...
I0224 13:21:20.830361 927252 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:21:22.845290 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:25.339987 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:27.835070 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:29.836102 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:32.335359 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:34.336416 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:36.354260 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:38.836592 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:40.837324 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:43.337000 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:45.837011 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:47.837046 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:50.384479 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:52.836991 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:55.335519 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:57.336789 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:59.839156 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:01.336376 927252 pod_ready.go:93] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"True"
I0224 13:22:01.336412 927252 pod_ready.go:82] duration metric: took 40.506040441s for pod "etcd-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.336431 927252 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.343183 927252 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"True"
I0224 13:22:01.343208 927252 pod_ready.go:82] duration metric: took 6.768835ms for pod "kube-apiserver-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.343225 927252 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.349177 927252 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"True"
I0224 13:22:01.349208 927252 pod_ready.go:82] duration metric: took 5.973237ms for pod "kube-controller-manager-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.349223 927252 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gxpjd" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.357462 927252 pod_ready.go:93] pod "kube-proxy-gxpjd" in "kube-system" namespace has status "Ready":"True"
I0224 13:22:01.357506 927252 pod_ready.go:82] duration metric: took 8.273633ms for pod "kube-proxy-gxpjd" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.357522 927252 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.363894 927252 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"True"
I0224 13:22:01.363928 927252 pod_ready.go:82] duration metric: took 6.387118ms for pod "kube-scheduler-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.363947 927252 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace to be "Ready" ...
I0224 13:22:03.369160 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:05.869746 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:08.370036 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:10.370090 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:12.869259 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:14.870040 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:17.368856 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:19.369667 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:21.869388 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:23.870282 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:25.870514 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:28.369257 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:30.369720 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:32.869276 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:34.870645 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:37.369147 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:39.369527 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:41.370689 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:43.869480 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:45.869526 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:47.870772 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:50.369084 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:52.868934 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:54.869801 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:56.869906 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:59.368718 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:01.370114 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:03.868583 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:05.869283 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:08.368952 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:10.369480 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:12.868948 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:14.869955 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:17.369024 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:19.869202 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:21.870114 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:24.368988 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:26.369648 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:28.870121 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:31.369158 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:33.378119 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:35.869876 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:38.369849 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:40.870042 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:43.369324 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:45.870225 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:47.871060 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:50.369017 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:52.369152 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:54.372968 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:56.374098 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:58.868892 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:00.869797 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:03.370080 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:05.869495 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:08.369948 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:10.869710 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:12.869743 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:15.368318 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:17.368770 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:19.869523 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:21.871108 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:24.369568 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:26.869203 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:29.368644 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:31.369383 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:33.870123 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:36.368830 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:38.369278 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:40.869261 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:42.869419 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:44.870084 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:47.369221 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:49.371098 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:51.870677 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:53.872634 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:56.370227 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:58.870140 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:01.374446 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:03.870621 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:06.370145 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:08.869312 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:10.869392 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:13.369247 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:15.369393 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:17.869827 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:19.870381 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:21.870670 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:24.368689 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:26.369755 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:28.425070 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:30.869733 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:32.870199 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:34.870900 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:37.369060 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:39.369731 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:41.871019 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:43.872810 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:46.369337 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:48.369412 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:50.369572 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:52.869501 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:54.870077 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:56.870133 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:59.372678 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:26:01.374544 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:26:01.374583 927252 pod_ready.go:82] duration metric: took 4m0.010627786s for pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace to be "Ready" ...
E0224 13:26:01.374597 927252 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0224 13:26:01.374606 927252 pod_ready.go:39] duration metric: took 5m25.269530815s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 13:26:01.374625 927252 api_server.go:52] waiting for apiserver process to appear ...
I0224 13:26:01.374679 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0224 13:26:01.374755 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0224 13:26:01.456344 927252 cri.go:89] found id: "43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a"
I0224 13:26:01.456369 927252 cri.go:89] found id: "a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa"
I0224 13:26:01.456374 927252 cri.go:89] found id: ""
I0224 13:26:01.456382 927252 logs.go:282] 2 containers: [43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa]
I0224 13:26:01.456444 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.460731 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.464631 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0224 13:26:01.464708 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0224 13:26:01.515782 927252 cri.go:89] found id: "ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf"
I0224 13:26:01.515811 927252 cri.go:89] found id: "f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8"
I0224 13:26:01.515817 927252 cri.go:89] found id: ""
I0224 13:26:01.515826 927252 logs.go:282] 2 containers: [ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8]
I0224 13:26:01.515906 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.520410 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.526491 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0224 13:26:01.526730 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0224 13:26:01.573251 927252 cri.go:89] found id: "911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c"
I0224 13:26:01.573276 927252 cri.go:89] found id: "9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5"
I0224 13:26:01.573281 927252 cri.go:89] found id: ""
I0224 13:26:01.573290 927252 logs.go:282] 2 containers: [911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c 9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5]
I0224 13:26:01.573383 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.577833 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.581941 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0224 13:26:01.582076 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0224 13:26:01.631487 927252 cri.go:89] found id: "e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd"
I0224 13:26:01.631509 927252 cri.go:89] found id: "46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73"
I0224 13:26:01.631514 927252 cri.go:89] found id: ""
I0224 13:26:01.631522 927252 logs.go:282] 2 containers: [e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd 46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73]
I0224 13:26:01.631612 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.635833 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.641445 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0224 13:26:01.641684 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0224 13:26:01.684531 927252 cri.go:89] found id: "d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01"
I0224 13:26:01.684578 927252 cri.go:89] found id: "bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3"
I0224 13:26:01.684586 927252 cri.go:89] found id: ""
I0224 13:26:01.684594 927252 logs.go:282] 2 containers: [d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01 bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3]
I0224 13:26:01.684663 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.689913 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.695230 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0224 13:26:01.695367 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0224 13:26:01.743079 927252 cri.go:89] found id: "25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9"
I0224 13:26:01.743259 927252 cri.go:89] found id: "f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d"
I0224 13:26:01.743281 927252 cri.go:89] found id: ""
I0224 13:26:01.743303 927252 logs.go:282] 2 containers: [25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9 f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d]
I0224 13:26:01.743503 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.748515 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.754139 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0224 13:26:01.754288 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0224 13:26:01.807332 927252 cri.go:89] found id: "9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597"
I0224 13:26:01.807415 927252 cri.go:89] found id: "1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7"
I0224 13:26:01.807427 927252 cri.go:89] found id: ""
I0224 13:26:01.807435 927252 logs.go:282] 2 containers: [9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597 1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7]
I0224 13:26:01.807516 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.811985 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.816520 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0224 13:26:01.816635 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0224 13:26:01.864404 927252 cri.go:89] found id: "061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419"
I0224 13:26:01.864429 927252 cri.go:89] found id: ""
I0224 13:26:01.864438 927252 logs.go:282] 1 containers: [061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419]
I0224 13:26:01.864536 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.868937 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0224 13:26:01.869053 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0224 13:26:01.915179 927252 cri.go:89] found id: "3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d"
I0224 13:26:01.915203 927252 cri.go:89] found id: "6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992"
I0224 13:26:01.915209 927252 cri.go:89] found id: ""
I0224 13:26:01.915219 927252 logs.go:282] 2 containers: [3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d 6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992]
I0224 13:26:01.915278 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.919153 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.922828 927252 logs.go:123] Gathering logs for kube-controller-manager [25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9] ...
I0224 13:26:01.922853 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9"
I0224 13:26:02.000213 927252 logs.go:123] Gathering logs for kube-controller-manager [f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d] ...
I0224 13:26:02.000253 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d"
I0224 13:26:02.071302 927252 logs.go:123] Gathering logs for storage-provisioner [6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992] ...
I0224 13:26:02.071346 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992"
I0224 13:26:02.113257 927252 logs.go:123] Gathering logs for kube-apiserver [a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa] ...
I0224 13:26:02.113286 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa"
I0224 13:26:02.185372 927252 logs.go:123] Gathering logs for etcd [ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf] ...
I0224 13:26:02.185407 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf"
I0224 13:26:02.230470 927252 logs.go:123] Gathering logs for etcd [f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8] ...
I0224 13:26:02.230502 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8"
I0224 13:26:02.275593 927252 logs.go:123] Gathering logs for container status ...
I0224 13:26:02.275623 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 13:26:02.331810 927252 logs.go:123] Gathering logs for kube-apiserver [43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a] ...
I0224 13:26:02.331841 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a"
I0224 13:26:02.410814 927252 logs.go:123] Gathering logs for coredns [911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c] ...
I0224 13:26:02.410849 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c"
I0224 13:26:02.460154 927252 logs.go:123] Gathering logs for kube-scheduler [46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73] ...
I0224 13:26:02.460181 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73"
I0224 13:26:02.503412 927252 logs.go:123] Gathering logs for storage-provisioner [3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d] ...
I0224 13:26:02.503441 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d"
I0224 13:26:02.548329 927252 logs.go:123] Gathering logs for containerd ...
I0224 13:26:02.548358 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0224 13:26:02.609571 927252 logs.go:123] Gathering logs for dmesg ...
I0224 13:26:02.609615 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0224 13:26:02.626935 927252 logs.go:123] Gathering logs for coredns [9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5] ...
I0224 13:26:02.626964 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5"
I0224 13:26:02.672895 927252 logs.go:123] Gathering logs for kubernetes-dashboard [061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419] ...
I0224 13:26:02.672924 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419"
I0224 13:26:02.711480 927252 logs.go:123] Gathering logs for kube-proxy [d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01] ...
I0224 13:26:02.711510 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01"
I0224 13:26:02.750943 927252 logs.go:123] Gathering logs for kube-proxy [bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3] ...
I0224 13:26:02.750972 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3"
I0224 13:26:02.789084 927252 logs.go:123] Gathering logs for kindnet [9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597] ...
I0224 13:26:02.789119 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597"
I0224 13:26:02.843974 927252 logs.go:123] Gathering logs for kindnet [1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7] ...
I0224 13:26:02.844003 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7"
I0224 13:26:02.887921 927252 logs.go:123] Gathering logs for kubelet ...
I0224 13:26:02.887952 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0224 13:26:02.947275 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:39 old-k8s-version-041199 kubelet[667]: E0224 13:20:39.112850 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:02.947507 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:39 old-k8s-version-041199 kubelet[667]: E0224 13:20:39.658396 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.950434 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:50 old-k8s-version-041199 kubelet[667]: E0224 13:20:50.259925 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:02.952740 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:01 old-k8s-version-041199 kubelet[667]: E0224 13:21:01.758905 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.953084 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:02 old-k8s-version-041199 kubelet[667]: E0224 13:21:02.770204 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.953639 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:04 old-k8s-version-041199 kubelet[667]: E0224 13:21:04.241853 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.954082 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:09 old-k8s-version-041199 kubelet[667]: E0224 13:21:09.799481 667 pod_workers.go:191] Error syncing pod 6a90578d-b6eb-41b6-8f00-06711366057b ("storage-provisioner_kube-system(6a90578d-b6eb-41b6-8f00-06711366057b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6a90578d-b6eb-41b6-8f00-06711366057b)"
W0224 13:26:02.954415 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:10 old-k8s-version-041199 kubelet[667]: E0224 13:21:10.834981 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.957258 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:16 old-k8s-version-041199 kubelet[667]: E0224 13:21:16.250896 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:02.957999 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:25 old-k8s-version-041199 kubelet[667]: E0224 13:21:25.847989 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.958327 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:30 old-k8s-version-041199 kubelet[667]: E0224 13:21:30.835028 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.958532 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:31 old-k8s-version-041199 kubelet[667]: E0224 13:21:31.242247 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.958720 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:43 old-k8s-version-041199 kubelet[667]: E0224 13:21:43.241958 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.959309 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:46 old-k8s-version-041199 kubelet[667]: E0224 13:21:46.925238 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.959637 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:50 old-k8s-version-041199 kubelet[667]: E0224 13:21:50.836033 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.959820 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:56 old-k8s-version-041199 kubelet[667]: E0224 13:21:56.241820 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.960155 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:03 old-k8s-version-041199 kubelet[667]: E0224 13:22:03.245426 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.962600 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:10 old-k8s-version-041199 kubelet[667]: E0224 13:22:10.250794 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:02.962927 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:15 old-k8s-version-041199 kubelet[667]: E0224 13:22:15.241830 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.963112 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:21 old-k8s-version-041199 kubelet[667]: E0224 13:22:21.243021 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.963440 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:26 old-k8s-version-041199 kubelet[667]: E0224 13:22:26.241985 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.963625 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:35 old-k8s-version-041199 kubelet[667]: E0224 13:22:35.242040 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.964213 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:41 old-k8s-version-041199 kubelet[667]: E0224 13:22:41.077125 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.964398 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:50 old-k8s-version-041199 kubelet[667]: E0224 13:22:50.241885 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.964726 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:50 old-k8s-version-041199 kubelet[667]: E0224 13:22:50.835068 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.964911 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:01 old-k8s-version-041199 kubelet[667]: E0224 13:23:01.245324 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.965236 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:02 old-k8s-version-041199 kubelet[667]: E0224 13:23:02.241145 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.965564 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:14 old-k8s-version-041199 kubelet[667]: E0224 13:23:14.241413 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.965756 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:15 old-k8s-version-041199 kubelet[667]: E0224 13:23:15.242710 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.966085 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:28 old-k8s-version-041199 kubelet[667]: E0224 13:23:28.243135 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.966267 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:30 old-k8s-version-041199 kubelet[667]: E0224 13:23:30.241731 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.966615 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:40 old-k8s-version-041199 kubelet[667]: E0224 13:23:40.241083 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.969042 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:43 old-k8s-version-041199 kubelet[667]: E0224 13:23:43.253359 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:02.969409 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:54 old-k8s-version-041199 kubelet[667]: E0224 13:23:54.246604 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.969644 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:58 old-k8s-version-041199 kubelet[667]: E0224 13:23:58.241520 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.969961 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:09 old-k8s-version-041199 kubelet[667]: E0224 13:24:09.241847 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.970419 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:09 old-k8s-version-041199 kubelet[667]: E0224 13:24:09.331557 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.970746 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:10 old-k8s-version-041199 kubelet[667]: E0224 13:24:10.835264 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.970928 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:23 old-k8s-version-041199 kubelet[667]: E0224 13:24:23.242791 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.971253 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:25 old-k8s-version-041199 kubelet[667]: E0224 13:24:25.241309 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.971438 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:35 old-k8s-version-041199 kubelet[667]: E0224 13:24:35.243051 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.971770 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:40 old-k8s-version-041199 kubelet[667]: E0224 13:24:40.241570 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.971960 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:49 old-k8s-version-041199 kubelet[667]: E0224 13:24:49.241522 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.972284 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:54 old-k8s-version-041199 kubelet[667]: E0224 13:24:54.241230 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.972468 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:00 old-k8s-version-041199 kubelet[667]: E0224 13:25:00.248099 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.972793 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:05 old-k8s-version-041199 kubelet[667]: E0224 13:25:05.247533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.972976 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:15 old-k8s-version-041199 kubelet[667]: E0224 13:25:15.242757 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.973304 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:18 old-k8s-version-041199 kubelet[667]: E0224 13:25:18.241668 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.973487 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:29 old-k8s-version-041199 kubelet[667]: E0224 13:25:29.245663 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.973821 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:32 old-k8s-version-041199 kubelet[667]: E0224 13:25:32.241514 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.974147 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.241131 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.974331 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.242341 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.974656 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:55 old-k8s-version-041199 kubelet[667]: E0224 13:25:55.244533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.974839 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:59 old-k8s-version-041199 kubelet[667]: E0224 13:25:59.241703 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0224 13:26:02.974850 927252 logs.go:123] Gathering logs for describe nodes ...
I0224 13:26:02.974865 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0224 13:26:03.140335 927252 logs.go:123] Gathering logs for kube-scheduler [e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd] ...
I0224 13:26:03.140365 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd"
I0224 13:26:03.182100 927252 out.go:358] Setting ErrFile to fd 2...
I0224 13:26:03.182131 927252 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0224 13:26:03.182203 927252 out.go:270] X Problems detected in kubelet:
W0224 13:26:03.182220 927252 out.go:270] Feb 24 13:25:32 old-k8s-version-041199 kubelet[667]: E0224 13:25:32.241514 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:03.182231 927252 out.go:270] Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.241131 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:03.182239 927252 out.go:270] Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.242341 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:03.182245 927252 out.go:270] Feb 24 13:25:55 old-k8s-version-041199 kubelet[667]: E0224 13:25:55.244533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:03.182251 927252 out.go:270] Feb 24 13:25:59 old-k8s-version-041199 kubelet[667]: E0224 13:25:59.241703 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0224 13:26:03.182442 927252 out.go:358] Setting ErrFile to fd 2...
I0224 13:26:03.182459 927252 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 13:26:13.182775 927252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 13:26:13.195315 927252 api_server.go:72] duration metric: took 5m58.004084642s to wait for apiserver process to appear ...
I0224 13:26:13.195342 927252 api_server.go:88] waiting for apiserver healthz status ...
I0224 13:26:13.195379 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0224 13:26:13.195438 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0224 13:26:13.236507 927252 cri.go:89] found id: "43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a"
I0224 13:26:13.236529 927252 cri.go:89] found id: "a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa"
I0224 13:26:13.236534 927252 cri.go:89] found id: ""
I0224 13:26:13.236542 927252 logs.go:282] 2 containers: [43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa]
I0224 13:26:13.236606 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.240470 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.245362 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0224 13:26:13.245433 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0224 13:26:13.286760 927252 cri.go:89] found id: "ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf"
I0224 13:26:13.286786 927252 cri.go:89] found id: "f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8"
I0224 13:26:13.286790 927252 cri.go:89] found id: ""
I0224 13:26:13.286798 927252 logs.go:282] 2 containers: [ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8]
I0224 13:26:13.286857 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.291304 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.295148 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0224 13:26:13.295220 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0224 13:26:13.340080 927252 cri.go:89] found id: "911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c"
I0224 13:26:13.340103 927252 cri.go:89] found id: "9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5"
I0224 13:26:13.340108 927252 cri.go:89] found id: ""
I0224 13:26:13.340116 927252 logs.go:282] 2 containers: [911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c 9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5]
I0224 13:26:13.340176 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.344114 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.347453 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0224 13:26:13.347528 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0224 13:26:13.387402 927252 cri.go:89] found id: "e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd"
I0224 13:26:13.387426 927252 cri.go:89] found id: "46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73"
I0224 13:26:13.387431 927252 cri.go:89] found id: ""
I0224 13:26:13.387440 927252 logs.go:282] 2 containers: [e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd 46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73]
I0224 13:26:13.387498 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.391191 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.394743 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0224 13:26:13.394847 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0224 13:26:13.442658 927252 cri.go:89] found id: "d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01"
I0224 13:26:13.442681 927252 cri.go:89] found id: "bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3"
I0224 13:26:13.442686 927252 cri.go:89] found id: ""
I0224 13:26:13.442694 927252 logs.go:282] 2 containers: [d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01 bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3]
I0224 13:26:13.442749 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.446724 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.451089 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0224 13:26:13.451161 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0224 13:26:13.494590 927252 cri.go:89] found id: "25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9"
I0224 13:26:13.494659 927252 cri.go:89] found id: "f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d"
I0224 13:26:13.494677 927252 cri.go:89] found id: ""
I0224 13:26:13.494697 927252 logs.go:282] 2 containers: [25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9 f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d]
I0224 13:26:13.494786 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.498342 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.501762 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0224 13:26:13.501849 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0224 13:26:13.556809 927252 cri.go:89] found id: "9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597"
I0224 13:26:13.556832 927252 cri.go:89] found id: "1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7"
I0224 13:26:13.556838 927252 cri.go:89] found id: ""
I0224 13:26:13.556845 927252 logs.go:282] 2 containers: [9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597 1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7]
I0224 13:26:13.556929 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.560948 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.564523 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0224 13:26:13.564600 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0224 13:26:13.619145 927252 cri.go:89] found id: "061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419"
I0224 13:26:13.619169 927252 cri.go:89] found id: ""
I0224 13:26:13.619177 927252 logs.go:282] 1 containers: [061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419]
I0224 13:26:13.619250 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.622662 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0224 13:26:13.622754 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0224 13:26:13.667120 927252 cri.go:89] found id: "3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d"
I0224 13:26:13.667144 927252 cri.go:89] found id: "6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992"
I0224 13:26:13.667149 927252 cri.go:89] found id: ""
I0224 13:26:13.667156 927252 logs.go:282] 2 containers: [3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d 6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992]
I0224 13:26:13.667221 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.670885 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.674330 927252 logs.go:123] Gathering logs for kube-apiserver [43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a] ...
I0224 13:26:13.674357 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a"
I0224 13:26:13.728017 927252 logs.go:123] Gathering logs for kube-proxy [bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3] ...
I0224 13:26:13.728053 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3"
I0224 13:26:13.771893 927252 logs.go:123] Gathering logs for kube-controller-manager [f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d] ...
I0224 13:26:13.771923 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d"
I0224 13:26:13.842183 927252 logs.go:123] Gathering logs for kindnet [9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597] ...
I0224 13:26:13.842220 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597"
I0224 13:26:13.884645 927252 logs.go:123] Gathering logs for kindnet [1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7] ...
I0224 13:26:13.884674 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7"
I0224 13:26:13.932797 927252 logs.go:123] Gathering logs for dmesg ...
I0224 13:26:13.932824 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0224 13:26:13.951072 927252 logs.go:123] Gathering logs for kube-apiserver [a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa] ...
I0224 13:26:13.951104 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa"
I0224 13:26:14.025649 927252 logs.go:123] Gathering logs for etcd [ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf] ...
I0224 13:26:14.025696 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf"
I0224 13:26:14.069785 927252 logs.go:123] Gathering logs for coredns [911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c] ...
I0224 13:26:14.069816 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c"
I0224 13:26:14.109379 927252 logs.go:123] Gathering logs for storage-provisioner [3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d] ...
I0224 13:26:14.109417 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d"
I0224 13:26:14.156027 927252 logs.go:123] Gathering logs for storage-provisioner [6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992] ...
I0224 13:26:14.156063 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992"
I0224 13:26:14.214626 927252 logs.go:123] Gathering logs for kubelet ...
I0224 13:26:14.214661 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0224 13:26:14.278321 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:39 old-k8s-version-041199 kubelet[667]: E0224 13:20:39.112850 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:14.278526 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:39 old-k8s-version-041199 kubelet[667]: E0224 13:20:39.658396 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.281345 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:50 old-k8s-version-041199 kubelet[667]: E0224 13:20:50.259925 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:14.283488 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:01 old-k8s-version-041199 kubelet[667]: E0224 13:21:01.758905 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.283821 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:02 old-k8s-version-041199 kubelet[667]: E0224 13:21:02.770204 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.284495 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:04 old-k8s-version-041199 kubelet[667]: E0224 13:21:04.241853 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.284943 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:09 old-k8s-version-041199 kubelet[667]: E0224 13:21:09.799481 667 pod_workers.go:191] Error syncing pod 6a90578d-b6eb-41b6-8f00-06711366057b ("storage-provisioner_kube-system(6a90578d-b6eb-41b6-8f00-06711366057b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6a90578d-b6eb-41b6-8f00-06711366057b)"
W0224 13:26:14.285309 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:10 old-k8s-version-041199 kubelet[667]: E0224 13:21:10.834981 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.288170 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:16 old-k8s-version-041199 kubelet[667]: E0224 13:21:16.250896 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:14.288926 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:25 old-k8s-version-041199 kubelet[667]: E0224 13:21:25.847989 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.289260 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:30 old-k8s-version-041199 kubelet[667]: E0224 13:21:30.835028 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.289447 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:31 old-k8s-version-041199 kubelet[667]: E0224 13:21:31.242247 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.289645 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:43 old-k8s-version-041199 kubelet[667]: E0224 13:21:43.241958 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.290826 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:46 old-k8s-version-041199 kubelet[667]: E0224 13:21:46.925238 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.291169 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:50 old-k8s-version-041199 kubelet[667]: E0224 13:21:50.836033 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.291376 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:56 old-k8s-version-041199 kubelet[667]: E0224 13:21:56.241820 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.291712 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:03 old-k8s-version-041199 kubelet[667]: E0224 13:22:03.245426 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.294286 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:10 old-k8s-version-041199 kubelet[667]: E0224 13:22:10.250794 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:14.294623 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:15 old-k8s-version-041199 kubelet[667]: E0224 13:22:15.241830 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.294808 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:21 old-k8s-version-041199 kubelet[667]: E0224 13:22:21.243021 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.295137 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:26 old-k8s-version-041199 kubelet[667]: E0224 13:22:26.241985 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.295330 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:35 old-k8s-version-041199 kubelet[667]: E0224 13:22:35.242040 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.295941 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:41 old-k8s-version-041199 kubelet[667]: E0224 13:22:41.077125 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.296126 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:50 old-k8s-version-041199 kubelet[667]: E0224 13:22:50.241885 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.296452 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:50 old-k8s-version-041199 kubelet[667]: E0224 13:22:50.835068 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.296639 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:01 old-k8s-version-041199 kubelet[667]: E0224 13:23:01.245324 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.296964 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:02 old-k8s-version-041199 kubelet[667]: E0224 13:23:02.241145 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.297289 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:14 old-k8s-version-041199 kubelet[667]: E0224 13:23:14.241413 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.297475 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:15 old-k8s-version-041199 kubelet[667]: E0224 13:23:15.242710 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.297815 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:28 old-k8s-version-041199 kubelet[667]: E0224 13:23:28.243135 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.298002 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:30 old-k8s-version-041199 kubelet[667]: E0224 13:23:30.241731 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.298329 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:40 old-k8s-version-041199 kubelet[667]: E0224 13:23:40.241083 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.300780 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:43 old-k8s-version-041199 kubelet[667]: E0224 13:23:43.253359 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:14.301107 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:54 old-k8s-version-041199 kubelet[667]: E0224 13:23:54.246604 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.301290 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:58 old-k8s-version-041199 kubelet[667]: E0224 13:23:58.241520 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.301638 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:09 old-k8s-version-041199 kubelet[667]: E0224 13:24:09.241847 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.302143 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:09 old-k8s-version-041199 kubelet[667]: E0224 13:24:09.331557 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.302478 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:10 old-k8s-version-041199 kubelet[667]: E0224 13:24:10.835264 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.302663 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:23 old-k8s-version-041199 kubelet[667]: E0224 13:24:23.242791 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.303002 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:25 old-k8s-version-041199 kubelet[667]: E0224 13:24:25.241309 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.303187 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:35 old-k8s-version-041199 kubelet[667]: E0224 13:24:35.243051 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.303513 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:40 old-k8s-version-041199 kubelet[667]: E0224 13:24:40.241570 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.303698 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:49 old-k8s-version-041199 kubelet[667]: E0224 13:24:49.241522 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.304028 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:54 old-k8s-version-041199 kubelet[667]: E0224 13:24:54.241230 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.304212 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:00 old-k8s-version-041199 kubelet[667]: E0224 13:25:00.248099 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.304537 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:05 old-k8s-version-041199 kubelet[667]: E0224 13:25:05.247533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.304723 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:15 old-k8s-version-041199 kubelet[667]: E0224 13:25:15.242757 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.305053 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:18 old-k8s-version-041199 kubelet[667]: E0224 13:25:18.241668 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.305238 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:29 old-k8s-version-041199 kubelet[667]: E0224 13:25:29.245663 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.305563 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:32 old-k8s-version-041199 kubelet[667]: E0224 13:25:32.241514 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.305908 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.241131 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.306094 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.242341 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.306931 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:55 old-k8s-version-041199 kubelet[667]: E0224 13:25:55.244533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.307134 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:59 old-k8s-version-041199 kubelet[667]: E0224 13:25:59.241703 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.307517 927252 logs.go:138] Found kubelet problem: Feb 24 13:26:07 old-k8s-version-041199 kubelet[667]: E0224 13:26:07.249713 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.307722 927252 logs.go:138] Found kubelet problem: Feb 24 13:26:11 old-k8s-version-041199 kubelet[667]: E0224 13:26:11.241765 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0224 13:26:14.307736 927252 logs.go:123] Gathering logs for etcd [f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8] ...
I0224 13:26:14.307750 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8"
I0224 13:26:14.369693 927252 logs.go:123] Gathering logs for coredns [9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5] ...
I0224 13:26:14.369730 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5"
I0224 13:26:14.412096 927252 logs.go:123] Gathering logs for kube-proxy [d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01] ...
I0224 13:26:14.412127 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01"
I0224 13:26:14.467225 927252 logs.go:123] Gathering logs for kubernetes-dashboard [061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419] ...
I0224 13:26:14.467253 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419"
I0224 13:26:14.513827 927252 logs.go:123] Gathering logs for containerd ...
I0224 13:26:14.513855 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0224 13:26:14.578597 927252 logs.go:123] Gathering logs for container status ...
I0224 13:26:14.578639 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 13:26:14.626162 927252 logs.go:123] Gathering logs for describe nodes ...
I0224 13:26:14.626194 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0224 13:26:14.767473 927252 logs.go:123] Gathering logs for kube-scheduler [e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd] ...
I0224 13:26:14.767505 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd"
I0224 13:26:14.815377 927252 logs.go:123] Gathering logs for kube-scheduler [46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73] ...
I0224 13:26:14.815410 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73"
I0224 13:26:14.860179 927252 logs.go:123] Gathering logs for kube-controller-manager [25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9] ...
I0224 13:26:14.860224 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9"
I0224 13:26:14.929779 927252 out.go:358] Setting ErrFile to fd 2...
I0224 13:26:14.929809 927252 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0224 13:26:14.929865 927252 out.go:270] X Problems detected in kubelet:
W0224 13:26:14.929880 927252 out.go:270] Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.242341 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.929888 927252 out.go:270] Feb 24 13:25:55 old-k8s-version-041199 kubelet[667]: E0224 13:25:55.244533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.929897 927252 out.go:270] Feb 24 13:25:59 old-k8s-version-041199 kubelet[667]: E0224 13:25:59.241703 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.929911 927252 out.go:270] Feb 24 13:26:07 old-k8s-version-041199 kubelet[667]: E0224 13:26:07.249713 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.929919 927252 out.go:270] Feb 24 13:26:11 old-k8s-version-041199 kubelet[667]: E0224 13:26:11.241765 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0224 13:26:14.929928 927252 out.go:358] Setting ErrFile to fd 2...
I0224 13:26:14.929934 927252 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 13:26:24.931888 927252 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0224 13:26:24.944372 927252 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0224 13:26:24.952204 927252 out.go:201]
W0224 13:26:24.958023 927252 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0224 13:26:24.958235 927252 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0224 13:26:24.958300 927252 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0224 13:26:24.958333 927252 out.go:270] *
W0224 13:26:24.963537 927252 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0224 13:26:24.967780 927252 out.go:201]
** /stderr **
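The exit above (`K8S_UNHEALTHY_CONTROL_PLANE`, minikube issue #11417) comes with a suggested recovery: purge the profile state and re-create it. A sketch of that recovery using the same profile name and flags as the failing invocation; whether it clears this particular CI flake is not confirmed here, and note the first command is destructive across all profiles:

```shell
# Destructive: removes ALL minikube profiles and cached state, not just this one.
minikube delete --all --purge

# Re-create the profile with the flags the failing test run used.
minikube start -p old-k8s-version-041199 \
  --memory=2200 \
  --driver=docker \
  --container-runtime=containerd \
  --kubernetes-version=v1.20.0
```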
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-041199 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-041199
helpers_test.go:235: (dbg) docker inspect old-k8s-version-041199:
-- stdout --
[
{
"Id": "bc4ba89b76534d52e0853cf92b60a6a725a52f830b891d1b142818aff3870b7f",
"Created": "2025-02-24T13:17:16.652322974Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 927383,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-02-24T13:20:07.301907839Z",
"FinishedAt": "2025-02-24T13:20:06.191374871Z"
},
"Image": "sha256:97f64c6c1710fa51774ed1bcabfea9e0981a3c815376cca47782248110390c98",
"ResolvConfPath": "/var/lib/docker/containers/bc4ba89b76534d52e0853cf92b60a6a725a52f830b891d1b142818aff3870b7f/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/bc4ba89b76534d52e0853cf92b60a6a725a52f830b891d1b142818aff3870b7f/hostname",
"HostsPath": "/var/lib/docker/containers/bc4ba89b76534d52e0853cf92b60a6a725a52f830b891d1b142818aff3870b7f/hosts",
"LogPath": "/var/lib/docker/containers/bc4ba89b76534d52e0853cf92b60a6a725a52f830b891d1b142818aff3870b7f/bc4ba89b76534d52e0853cf92b60a6a725a52f830b891d1b142818aff3870b7f-json.log",
"Name": "/old-k8s-version-041199",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"old-k8s-version-041199:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-041199",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "bc4ba89b76534d52e0853cf92b60a6a725a52f830b891d1b142818aff3870b7f",
"LowerDir": "/var/lib/docker/overlay2/d444dea25a1204e1fa5c458bc31bae99aad7807e65485cc6e169bd9753e33782-init/diff:/var/lib/docker/overlay2/9f8b318d4cf1eba57eb21802142b1ff2628d906f10ce2d9556ce721ffeb50418/diff",
"MergedDir": "/var/lib/docker/overlay2/d444dea25a1204e1fa5c458bc31bae99aad7807e65485cc6e169bd9753e33782/merged",
"UpperDir": "/var/lib/docker/overlay2/d444dea25a1204e1fa5c458bc31bae99aad7807e65485cc6e169bd9753e33782/diff",
"WorkDir": "/var/lib/docker/overlay2/d444dea25a1204e1fa5c458bc31bae99aad7807e65485cc6e169bd9753e33782/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "old-k8s-version-041199",
"Source": "/var/lib/docker/volumes/old-k8s-version-041199/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "old-k8s-version-041199",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-041199",
"name.minikube.sigs.k8s.io": "old-k8s-version-041199",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "61b956ff8f2017e8b31df2fc1ab05443ecf2a5edd6cb497741141eda9fc537dc",
"SandboxKey": "/var/run/docker/netns/61b956ff8f20",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33824"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33825"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33828"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33826"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33827"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-041199": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "76:67:0d:ae:ac:d7",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "b5d18ca4896a15da0c141c52f205f041be68afa45116db49cd1b0c7fd34fda26",
"EndpointID": "cc4b5b66c5250c6502ac353a3347f97db467bf32463524e8c51b13dffa3296b7",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-041199",
"bc4ba89b7653"
]
}
}
}
}
]
-- /stdout --
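The `NetworkSettings.Ports` map in the inspect dump above holds the randomized host ports (22 -> 33824, 8443 -> 33827, and so on). A small sketch that extracts them with `jq` (assumed installed), shown here against an abridged sample shaped like the dump; against the live container you would pipe `docker inspect old-k8s-version-041199` in instead:

```shell
# Abridged docker-inspect JSON, shaped like the dump above.
inspect='[{"NetworkSettings":{"Ports":{
  "22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33824"}],
  "8443/tcp":[{"HostIp":"127.0.0.1","HostPort":"33827"}]}}}]'

# Print "container-port -> host-port" for every bound port.
echo "$inspect" | jq -r \
  '.[0].NetworkSettings.Ports | to_entries[]
   | select(.value != null)
   | "\(.key) -> \(.value[0].HostPort)"'
```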
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-041199 -n old-k8s-version-041199
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-041199 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-041199 logs -n 25: (2.679948148s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| ssh | -p cilium-405144 sudo crio | cilium-405144 | jenkins | v1.35.0 | 24 Feb 25 13:16 UTC | |
| | config | | | | | |
| delete | -p cilium-405144 | cilium-405144 | jenkins | v1.35.0 | 24 Feb 25 13:16 UTC | 24 Feb 25 13:16 UTC |
| start | -p cert-expiration-141289 | cert-expiration-141289 | jenkins | v1.35.0 | 24 Feb 25 13:16 UTC | 24 Feb 25 13:16 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-129887 | force-systemd-env-129887 | jenkins | v1.35.0 | 24 Feb 25 13:16 UTC | 24 Feb 25 13:16 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-129887 | force-systemd-env-129887 | jenkins | v1.35.0 | 24 Feb 25 13:16 UTC | 24 Feb 25 13:16 UTC |
| start | -p cert-options-977486 | cert-options-977486 | jenkins | v1.35.0 | 24 Feb 25 13:16 UTC | 24 Feb 25 13:17 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-977486 ssh | cert-options-977486 | jenkins | v1.35.0 | 24 Feb 25 13:17 UTC | 24 Feb 25 13:17 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-977486 -- sudo | cert-options-977486 | jenkins | v1.35.0 | 24 Feb 25 13:17 UTC | 24 Feb 25 13:17 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-977486 | cert-options-977486 | jenkins | v1.35.0 | 24 Feb 25 13:17 UTC | 24 Feb 25 13:17 UTC |
| start | -p old-k8s-version-041199 | old-k8s-version-041199 | jenkins | v1.35.0 | 24 Feb 25 13:17 UTC | 24 Feb 25 13:19 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-141289 | cert-expiration-141289 | jenkins | v1.35.0 | 24 Feb 25 13:19 UTC | 24 Feb 25 13:19 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-141289 | cert-expiration-141289 | jenkins | v1.35.0 | 24 Feb 25 13:19 UTC | 24 Feb 25 13:19 UTC |
| addons | enable metrics-server -p old-k8s-version-041199 | old-k8s-version-041199 | jenkins | v1.35.0 | 24 Feb 25 13:19 UTC | 24 Feb 25 13:19 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| start | -p no-preload-037941 | no-preload-037941 | jenkins | v1.35.0 | 24 Feb 25 13:19 UTC | 24 Feb 25 13:21 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| stop | -p old-k8s-version-041199 | old-k8s-version-041199 | jenkins | v1.35.0 | 24 Feb 25 13:19 UTC | 24 Feb 25 13:20 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-041199 | old-k8s-version-041199 | jenkins | v1.35.0 | 24 Feb 25 13:20 UTC | 24 Feb 25 13:20 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-041199 | old-k8s-version-041199 | jenkins | v1.35.0 | 24 Feb 25 13:20 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-037941 | no-preload-037941 | jenkins | v1.35.0 | 24 Feb 25 13:21 UTC | 24 Feb 25 13:21 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-037941 | no-preload-037941 | jenkins | v1.35.0 | 24 Feb 25 13:21 UTC | 24 Feb 25 13:21 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-037941 | no-preload-037941 | jenkins | v1.35.0 | 24 Feb 25 13:21 UTC | 24 Feb 25 13:21 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-037941 | no-preload-037941 | jenkins | v1.35.0 | 24 Feb 25 13:21 UTC | 24 Feb 25 13:26 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| image | no-preload-037941 image list | no-preload-037941 | jenkins | v1.35.0 | 24 Feb 25 13:26 UTC | 24 Feb 25 13:26 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-037941 | no-preload-037941 | jenkins | v1.35.0 | 24 Feb 25 13:26 UTC | 24 Feb 25 13:26 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-037941 | no-preload-037941 | jenkins | v1.35.0 | 24 Feb 25 13:26 UTC | 24 Feb 25 13:26 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-037941 | no-preload-037941 | jenkins | v1.35.0 | 24 Feb 25 13:26 UTC | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/02/24 13:21:33
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.23.4 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0224 13:21:33.357282 932742 out.go:345] Setting OutFile to fd 1 ...
I0224 13:21:33.357530 932742 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 13:21:33.357539 932742 out.go:358] Setting ErrFile to fd 2...
I0224 13:21:33.357546 932742 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 13:21:33.357941 932742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-713351/.minikube/bin
I0224 13:21:33.358445 932742 out.go:352] Setting JSON to false
I0224 13:21:33.359690 932742 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14642,"bootTime":1740388652,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0224 13:21:33.359768 932742 start.go:139] virtualization:
I0224 13:21:33.362963 932742 out.go:177] * [no-preload-037941] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0224 13:21:33.366968 932742 out.go:177] - MINIKUBE_LOCATION=20451
I0224 13:21:33.367339 932742 notify.go:220] Checking for updates...
I0224 13:21:33.374141 932742 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0224 13:21:33.377223 932742 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20451-713351/kubeconfig
I0224 13:21:33.380162 932742 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-713351/.minikube
I0224 13:21:33.383241 932742 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0224 13:21:33.386134 932742 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0224 13:21:33.389668 932742 config.go:182] Loaded profile config "no-preload-037941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0224 13:21:33.390249 932742 driver.go:394] Setting default libvirt URI to qemu:///system
I0224 13:21:33.423800 932742 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
I0224 13:21:33.423931 932742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0224 13:21:33.484368 932742 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-02-24 13:21:33.474750098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
I0224 13:21:33.484526 932742 docker.go:318] overlay module found
I0224 13:21:33.487809 932742 out.go:177] * Using the docker driver based on existing profile
I0224 13:21:33.490719 932742 start.go:297] selected driver: docker
I0224 13:21:33.490741 932742 start.go:901] validating driver "docker" against &{Name:no-preload-037941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-037941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0224 13:21:33.490928 932742 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0224 13:21:33.491728 932742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0224 13:21:33.547631 932742 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-02-24 13:21:33.539053616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
I0224 13:21:33.548013 932742 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0224 13:21:33.548040 932742 cni.go:84] Creating CNI manager for ""
I0224 13:21:33.548087 932742 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0224 13:21:33.548140 932742 start.go:340] cluster config:
{Name:no-preload-037941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-037941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0224 13:21:33.551338 932742 out.go:177] * Starting "no-preload-037941" primary control-plane node in "no-preload-037941" cluster
I0224 13:21:33.554072 932742 cache.go:121] Beginning downloading kic base image for docker with containerd
I0224 13:21:33.556999 932742 out.go:177] * Pulling base image v0.0.46-1740046583-20436 ...
I0224 13:21:33.559832 932742 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local docker daemon
I0224 13:21:33.559794 932742 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0224 13:21:33.560077 932742 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/no-preload-037941/config.json ...
I0224 13:21:33.560410 932742 cache.go:107] acquiring lock: {Name:mk5e04c806cdeee81071479329ce8193d76802ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:21:33.560487 932742 cache.go:115] /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0224 13:21:33.560496 932742 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.698µs
I0224 13:21:33.560504 932742 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0224 13:21:33.560515 932742 cache.go:107] acquiring lock: {Name:mk0fbb2bad69fa965c95e74d76b26989fef115c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:21:33.560545 932742 cache.go:115] /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.2 exists
I0224 13:21:33.560550 932742 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.2" took 36.741µs
I0224 13:21:33.560556 932742 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
I0224 13:21:33.560565 932742 cache.go:107] acquiring lock: {Name:mk15d256fee71247f003bc9bf7ea483f175bfbd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:21:33.560593 932742 cache.go:115] /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
I0224 13:21:33.560598 932742 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.2" took 34.182µs
I0224 13:21:33.560611 932742 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
I0224 13:21:33.560620 932742 cache.go:107] acquiring lock: {Name:mkd00405c9d3834fb4fc7ffeafa8e76443ba65fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:21:33.560646 932742 cache.go:115] /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.2 exists
I0224 13:21:33.560650 932742 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.2" took 31.827µs
I0224 13:21:33.560656 932742 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
I0224 13:21:33.560665 932742 cache.go:107] acquiring lock: {Name:mk304d3c6860318ccdc5de4a7b5fb115c8c02553 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:21:33.560691 932742 cache.go:115] /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.2 exists
I0224 13:21:33.560696 932742 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.2" took 32.45µs
I0224 13:21:33.560701 932742 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
I0224 13:21:33.560712 932742 cache.go:107] acquiring lock: {Name:mkacb188061103ef86c0af64d29fec9fbef009dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:21:33.560738 932742 cache.go:115] /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
I0224 13:21:33.560742 932742 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 31.868µs
I0224 13:21:33.560748 932742 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
I0224 13:21:33.560773 932742 cache.go:107] acquiring lock: {Name:mk73633f7716f89564329dfd33e98949a3ae986b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:21:33.560799 932742 cache.go:115] /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 exists
I0224 13:21:33.560804 932742 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0" took 35.068µs
I0224 13:21:33.560814 932742 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 succeeded
I0224 13:21:33.560823 932742 cache.go:107] acquiring lock: {Name:mkb105d634fe5c996ae4f7e4e204b33e5cdc4019 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:21:33.560848 932742 cache.go:115] /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
I0224 13:21:33.560853 932742 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 30.883µs
I0224 13:21:33.560858 932742 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20451-713351/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
I0224 13:21:33.560864 932742 cache.go:87] Successfully saved all images to host disk.
I0224 13:21:33.579657 932742 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local docker daemon, skipping pull
I0224 13:21:33.579679 932742 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 exists in daemon, skipping load
I0224 13:21:33.579693 932742 cache.go:230] Successfully downloaded all kic artifacts
I0224 13:21:33.579723 932742 start.go:360] acquireMachinesLock for no-preload-037941: {Name:mk6c3b4d6091860b0fc070bb871cde094947bd15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 13:21:33.579781 932742 start.go:364] duration metric: took 36.044µs to acquireMachinesLock for "no-preload-037941"
I0224 13:21:33.579805 932742 start.go:96] Skipping create...Using existing machine configuration
I0224 13:21:33.579810 932742 fix.go:54] fixHost starting:
I0224 13:21:33.580089 932742 cli_runner.go:164] Run: docker container inspect no-preload-037941 --format={{.State.Status}}
I0224 13:21:33.597229 932742 fix.go:112] recreateIfNeeded on no-preload-037941: state=Stopped err=<nil>
W0224 13:21:33.597260 932742 fix.go:138] unexpected machine state, will restart: <nil>
I0224 13:21:33.602375 932742 out.go:177] * Restarting existing docker container for "no-preload-037941" ...
I0224 13:21:32.335359 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:34.336416 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:36.354260 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:33.605327 932742 cli_runner.go:164] Run: docker start no-preload-037941
I0224 13:21:33.890259 932742 cli_runner.go:164] Run: docker container inspect no-preload-037941 --format={{.State.Status}}
I0224 13:21:33.912099 932742 kic.go:430] container "no-preload-037941" state is running.
I0224 13:21:33.912502 932742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-037941
I0224 13:21:33.939985 932742 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/no-preload-037941/config.json ...
I0224 13:21:33.940232 932742 machine.go:93] provisionDockerMachine start ...
I0224 13:21:33.940298 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:33.964153 932742 main.go:141] libmachine: Using SSH client type: native
I0224 13:21:33.964735 932742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil> [] 0s} 127.0.0.1 33829 <nil> <nil>}
I0224 13:21:33.964756 932742 main.go:141] libmachine: About to run SSH command:
hostname
I0224 13:21:33.965354 932742 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0224 13:21:37.100291 932742 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-037941
I0224 13:21:37.100317 932742 ubuntu.go:169] provisioning hostname "no-preload-037941"
I0224 13:21:37.100385 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:37.122223 932742 main.go:141] libmachine: Using SSH client type: native
I0224 13:21:37.122529 932742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil> [] 0s} 127.0.0.1 33829 <nil> <nil>}
I0224 13:21:37.122553 932742 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-037941 && echo "no-preload-037941" | sudo tee /etc/hostname
I0224 13:21:37.262985 932742 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-037941
I0224 13:21:37.263067 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:37.283390 932742 main.go:141] libmachine: Using SSH client type: native
I0224 13:21:37.283646 932742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil> [] 0s} 127.0.0.1 33829 <nil> <nil>}
I0224 13:21:37.283663 932742 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-037941' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-037941/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-037941' | sudo tee -a /etc/hosts;
fi
fi
I0224 13:21:37.422045 932742 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0224 13:21:37.422071 932742 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20451-713351/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-713351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-713351/.minikube}
I0224 13:21:37.422095 932742 ubuntu.go:177] setting up certificates
I0224 13:21:37.422105 932742 provision.go:84] configureAuth start
I0224 13:21:37.422164 932742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-037941
I0224 13:21:37.440248 932742 provision.go:143] copyHostCerts
I0224 13:21:37.440317 932742 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-713351/.minikube/cert.pem, removing ...
I0224 13:21:37.440333 932742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-713351/.minikube/cert.pem
I0224 13:21:37.440410 932742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-713351/.minikube/cert.pem (1123 bytes)
I0224 13:21:37.440523 932742 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-713351/.minikube/key.pem, removing ...
I0224 13:21:37.440540 932742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-713351/.minikube/key.pem
I0224 13:21:37.440572 932742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-713351/.minikube/key.pem (1679 bytes)
I0224 13:21:37.440636 932742 exec_runner.go:144] found /home/jenkins/minikube-integration/20451-713351/.minikube/ca.pem, removing ...
I0224 13:21:37.440645 932742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20451-713351/.minikube/ca.pem
I0224 13:21:37.440674 932742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-713351/.minikube/ca.pem (1082 bytes)
I0224 13:21:37.440726 932742 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-713351/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca-key.pem org=jenkins.no-preload-037941 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-037941]
I0224 13:21:38.255981 932742 provision.go:177] copyRemoteCerts
I0224 13:21:38.256063 932742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0224 13:21:38.256111 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:38.273485 932742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/no-preload-037941/id_rsa Username:docker}
I0224 13:21:38.367492 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0224 13:21:38.401936 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0224 13:21:38.430287 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0224 13:21:38.456862 932742 provision.go:87] duration metric: took 1.034734338s to configureAuth
I0224 13:21:38.456933 932742 ubuntu.go:193] setting minikube options for container-runtime
I0224 13:21:38.457178 932742 config.go:182] Loaded profile config "no-preload-037941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0224 13:21:38.457192 932742 machine.go:96] duration metric: took 4.516951804s to provisionDockerMachine
I0224 13:21:38.457201 932742 start.go:293] postStartSetup for "no-preload-037941" (driver="docker")
I0224 13:21:38.457212 932742 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0224 13:21:38.457275 932742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0224 13:21:38.457319 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:38.475199 932742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/no-preload-037941/id_rsa Username:docker}
I0224 13:21:38.567965 932742 ssh_runner.go:195] Run: cat /etc/os-release
I0224 13:21:38.571416 932742 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0224 13:21:38.571451 932742 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0224 13:21:38.571461 932742 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0224 13:21:38.571468 932742 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0224 13:21:38.571478 932742 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-713351/.minikube/addons for local assets ...
I0224 13:21:38.571536 932742 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-713351/.minikube/files for local assets ...
I0224 13:21:38.571618 932742 filesync.go:149] local asset: /home/jenkins/minikube-integration/20451-713351/.minikube/files/etc/ssl/certs/7187302.pem -> 7187302.pem in /etc/ssl/certs
I0224 13:21:38.571734 932742 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0224 13:21:38.580643 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/files/etc/ssl/certs/7187302.pem --> /etc/ssl/certs/7187302.pem (1708 bytes)
I0224 13:21:38.606147 932742 start.go:296] duration metric: took 148.929657ms for postStartSetup
I0224 13:21:38.606292 932742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0224 13:21:38.606390 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:38.623509 932742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/no-preload-037941/id_rsa Username:docker}
I0224 13:21:38.715253 932742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0224 13:21:38.720189 932742 fix.go:56] duration metric: took 5.140369569s for fixHost
I0224 13:21:38.720213 932742 start.go:83] releasing machines lock for "no-preload-037941", held for 5.140420005s
I0224 13:21:38.720293 932742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-037941
I0224 13:21:38.737474 932742 ssh_runner.go:195] Run: cat /version.json
I0224 13:21:38.737530 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:38.737866 932742 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0224 13:21:38.737947 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:38.758521 932742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/no-preload-037941/id_rsa Username:docker}
I0224 13:21:38.759064 932742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/no-preload-037941/id_rsa Username:docker}
I0224 13:21:38.995771 932742 ssh_runner.go:195] Run: systemctl --version
I0224 13:21:39.002928 932742 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0224 13:21:39.008562 932742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0224 13:21:39.029303 932742 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0224 13:21:39.029414 932742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0224 13:21:39.038990 932742 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0224 13:21:39.039016 932742 start.go:495] detecting cgroup driver to use...
I0224 13:21:39.039071 932742 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0224 13:21:39.039146 932742 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0224 13:21:39.053947 932742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 13:21:39.066638 932742 docker.go:217] disabling cri-docker service (if available) ...
I0224 13:21:39.066708 932742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0224 13:21:39.080301 932742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0224 13:21:39.092886 932742 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0224 13:21:39.180257 932742 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0224 13:21:39.273713 932742 docker.go:233] disabling docker service ...
I0224 13:21:39.273797 932742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0224 13:21:39.288006 932742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0224 13:21:39.299664 932742 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0224 13:21:39.388961 932742 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0224 13:21:39.480187 932742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0224 13:21:39.492753 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 13:21:39.510644 932742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0224 13:21:39.522112 932742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0224 13:21:39.532629 932742 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0224 13:21:39.532714 932742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0224 13:21:39.544084 932742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 13:21:39.554135 932742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0224 13:21:39.565753 932742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 13:21:39.580564 932742 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0224 13:21:39.590239 932742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0224 13:21:39.600391 932742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0224 13:21:39.611061 932742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0224 13:21:39.621367 932742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0224 13:21:39.631293 932742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0224 13:21:39.640308 932742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 13:21:39.720232 932742 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0224 13:21:39.901991 932742 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0224 13:21:39.902065 932742 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0224 13:21:39.906155 932742 start.go:563] Will wait 60s for crictl version
I0224 13:21:39.906220 932742 ssh_runner.go:195] Run: which crictl
I0224 13:21:39.909700 932742 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0224 13:21:39.950509 932742 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.25
RuntimeApiVersion: v1
I0224 13:21:39.950583 932742 ssh_runner.go:195] Run: containerd --version
I0224 13:21:39.976476 932742 ssh_runner.go:195] Run: containerd --version
I0224 13:21:40.014404 932742 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
I0224 13:21:40.017720 932742 cli_runner.go:164] Run: docker network inspect no-preload-037941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0224 13:21:40.041530 932742 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0224 13:21:40.045999 932742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0224 13:21:40.059527 932742 kubeadm.go:883] updating cluster {Name:no-preload-037941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-037941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0224 13:21:40.059655 932742 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0224 13:21:40.059709 932742 ssh_runner.go:195] Run: sudo crictl images --output json
I0224 13:21:40.098095 932742 containerd.go:627] all images are preloaded for containerd runtime.
I0224 13:21:40.098126 932742 cache_images.go:84] Images are preloaded, skipping loading
I0224 13:21:40.098135 932742 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
I0224 13:21:40.098247 932742 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-037941 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:no-preload-037941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0224 13:21:40.098323 932742 ssh_runner.go:195] Run: sudo crictl info
I0224 13:21:40.143783 932742 cni.go:84] Creating CNI manager for ""
I0224 13:21:40.143808 932742 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0224 13:21:40.143820 932742 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0224 13:21:40.143844 932742 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-037941 NodeName:no-preload-037941 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0224 13:21:40.143974 932742 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "no-preload-037941"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0224 13:21:40.144056 932742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0224 13:21:40.154305 932742 binaries.go:44] Found k8s binaries, skipping transfer
I0224 13:21:40.154380 932742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0224 13:21:40.163462 932742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
I0224 13:21:40.183720 932742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0224 13:21:40.202029 932742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2307 bytes)
I0224 13:21:40.221114 932742 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0224 13:21:40.224879 932742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0224 13:21:40.235697 932742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 13:21:40.326589 932742 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0224 13:21:40.346892 932742 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/no-preload-037941 for IP: 192.168.85.2
I0224 13:21:40.346913 932742 certs.go:194] generating shared ca certs ...
I0224 13:21:40.346930 932742 certs.go:226] acquiring lock for ca certs: {Name:mkc72ecc1d89fe0792bd08d20ea71860b678bc29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 13:21:40.347067 932742 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-713351/.minikube/ca.key
I0224 13:21:40.347118 932742 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-713351/.minikube/proxy-client-ca.key
I0224 13:21:40.347130 932742 certs.go:256] generating profile certs ...
I0224 13:21:40.347215 932742 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/no-preload-037941/client.key
I0224 13:21:40.347279 932742 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/no-preload-037941/apiserver.key.ec7ce747
I0224 13:21:40.347328 932742 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/no-preload-037941/proxy-client.key
I0224 13:21:40.347439 932742 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/718730.pem (1338 bytes)
W0224 13:21:40.347473 932742 certs.go:480] ignoring /home/jenkins/minikube-integration/20451-713351/.minikube/certs/718730_empty.pem, impossibly tiny 0 bytes
I0224 13:21:40.347485 932742 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca-key.pem (1675 bytes)
I0224 13:21:40.347509 932742 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/ca.pem (1082 bytes)
I0224 13:21:40.347535 932742 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/cert.pem (1123 bytes)
I0224 13:21:40.347561 932742 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/certs/key.pem (1679 bytes)
I0224 13:21:40.347632 932742 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-713351/.minikube/files/etc/ssl/certs/7187302.pem (1708 bytes)
I0224 13:21:40.348263 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0224 13:21:40.373646 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0224 13:21:40.399659 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0224 13:21:40.426431 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0224 13:21:40.470604 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/no-preload-037941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0224 13:21:40.512179 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/no-preload-037941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0224 13:21:40.548915 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/no-preload-037941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0224 13:21:40.576896 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/profiles/no-preload-037941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0224 13:21:40.601388 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/files/etc/ssl/certs/7187302.pem --> /usr/share/ca-certificates/7187302.pem (1708 bytes)
I0224 13:21:40.628980 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0224 13:21:40.653882 932742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-713351/.minikube/certs/718730.pem --> /usr/share/ca-certificates/718730.pem (1338 bytes)
I0224 13:21:40.679806 932742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0224 13:21:40.698841 932742 ssh_runner.go:195] Run: openssl version
I0224 13:21:40.708326 932742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7187302.pem && ln -fs /usr/share/ca-certificates/7187302.pem /etc/ssl/certs/7187302.pem"
I0224 13:21:40.718741 932742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7187302.pem
I0224 13:21:40.722343 932742 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 24 12:41 /usr/share/ca-certificates/7187302.pem
I0224 13:21:40.722418 932742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7187302.pem
I0224 13:21:40.730199 932742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7187302.pem /etc/ssl/certs/3ec20f2e.0"
I0224 13:21:40.739237 932742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0224 13:21:40.748761 932742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0224 13:21:40.752379 932742 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:34 /usr/share/ca-certificates/minikubeCA.pem
I0224 13:21:40.752483 932742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0224 13:21:40.759712 932742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0224 13:21:40.768766 932742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/718730.pem && ln -fs /usr/share/ca-certificates/718730.pem /etc/ssl/certs/718730.pem"
I0224 13:21:40.778162 932742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/718730.pem
I0224 13:21:40.781750 932742 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 24 12:41 /usr/share/ca-certificates/718730.pem
I0224 13:21:40.781867 932742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/718730.pem
I0224 13:21:40.789256 932742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/718730.pem /etc/ssl/certs/51391683.0"
I0224 13:21:40.798392 932742 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0224 13:21:40.802390 932742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0224 13:21:40.810107 932742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0224 13:21:40.817529 932742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0224 13:21:40.824660 932742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0224 13:21:40.832405 932742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0224 13:21:40.841038 932742 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0224 13:21:40.847959 932742 kubeadm.go:392] StartCluster: {Name:no-preload-037941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-037941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0224 13:21:40.848065 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0224 13:21:40.848136 932742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0224 13:21:40.898410 932742 cri.go:89] found id: "b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4"
I0224 13:21:40.898431 932742 cri.go:89] found id: "0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b"
I0224 13:21:40.898436 932742 cri.go:89] found id: "c17d7507f0c1b0ccbfde50b87d8156e83c2fdf66a07b794d15268f41e34a1d04"
I0224 13:21:40.898449 932742 cri.go:89] found id: "91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a"
I0224 13:21:40.898453 932742 cri.go:89] found id: "99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499"
I0224 13:21:40.898457 932742 cri.go:89] found id: "c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e"
I0224 13:21:40.898460 932742 cri.go:89] found id: "57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b"
I0224 13:21:40.898464 932742 cri.go:89] found id: "d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba"
I0224 13:21:40.898466 932742 cri.go:89] found id: ""
I0224 13:21:40.898522 932742 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0224 13:21:40.918332 932742 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-02-24T13:21:40Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0224 13:21:40.918409 932742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0224 13:21:40.935818 932742 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0224 13:21:40.935838 932742 kubeadm.go:593] restartPrimaryControlPlane start ...
I0224 13:21:40.935900 932742 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0224 13:21:40.955089 932742 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0224 13:21:40.955725 932742 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-037941" does not appear in /home/jenkins/minikube-integration/20451-713351/kubeconfig
I0224 13:21:40.956016 932742 kubeconfig.go:62] /home/jenkins/minikube-integration/20451-713351/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-037941" cluster setting kubeconfig missing "no-preload-037941" context setting]
I0224 13:21:40.956498 932742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-713351/kubeconfig: {Name:mk2d402ee8f3936e3ec334c56d05ef6059f3cb5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 13:21:40.957983 932742 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0224 13:21:40.983124 932742 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I0224 13:21:40.983159 932742 kubeadm.go:597] duration metric: took 47.315787ms to restartPrimaryControlPlane
I0224 13:21:40.983169 932742 kubeadm.go:394] duration metric: took 135.219299ms to StartCluster
I0224 13:21:40.983185 932742 settings.go:142] acquiring lock: {Name:mk595fc9ff86cccbad8dd75071531f844958cc25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 13:21:40.983251 932742 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20451-713351/kubeconfig
I0224 13:21:40.984277 932742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-713351/kubeconfig: {Name:mk2d402ee8f3936e3ec334c56d05ef6059f3cb5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 13:21:40.984493 932742 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0224 13:21:40.984870 932742 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0224 13:21:40.984943 932742 addons.go:69] Setting storage-provisioner=true in profile "no-preload-037941"
I0224 13:21:40.984957 932742 addons.go:238] Setting addon storage-provisioner=true in "no-preload-037941"
W0224 13:21:40.984969 932742 addons.go:247] addon storage-provisioner should already be in state true
I0224 13:21:40.985001 932742 host.go:66] Checking if "no-preload-037941" exists ...
I0224 13:21:40.985487 932742 cli_runner.go:164] Run: docker container inspect no-preload-037941 --format={{.State.Status}}
I0224 13:21:40.985873 932742 config.go:182] Loaded profile config "no-preload-037941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0224 13:21:40.985937 932742 addons.go:69] Setting metrics-server=true in profile "no-preload-037941"
I0224 13:21:40.985951 932742 addons.go:238] Setting addon metrics-server=true in "no-preload-037941"
W0224 13:21:40.985957 932742 addons.go:247] addon metrics-server should already be in state true
I0224 13:21:40.985981 932742 host.go:66] Checking if "no-preload-037941" exists ...
I0224 13:21:40.986386 932742 cli_runner.go:164] Run: docker container inspect no-preload-037941 --format={{.State.Status}}
I0224 13:21:40.986547 932742 addons.go:69] Setting dashboard=true in profile "no-preload-037941"
I0224 13:21:40.986565 932742 addons.go:238] Setting addon dashboard=true in "no-preload-037941"
W0224 13:21:40.986572 932742 addons.go:247] addon dashboard should already be in state true
I0224 13:21:40.986592 932742 host.go:66] Checking if "no-preload-037941" exists ...
I0224 13:21:40.986992 932742 cli_runner.go:164] Run: docker container inspect no-preload-037941 --format={{.State.Status}}
I0224 13:21:40.993542 932742 addons.go:69] Setting default-storageclass=true in profile "no-preload-037941"
I0224 13:21:40.995376 932742 out.go:177] * Verifying Kubernetes components...
I0224 13:21:40.993579 932742 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-037941"
I0224 13:21:40.996715 932742 cli_runner.go:164] Run: docker container inspect no-preload-037941 --format={{.State.Status}}
I0224 13:21:41.000255 932742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 13:21:41.051668 932742 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0224 13:21:41.055062 932742 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0224 13:21:41.059803 932742 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0224 13:21:41.059830 932742 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0224 13:21:41.059901 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:41.069081 932742 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0224 13:21:41.073895 932742 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0224 13:21:41.073924 932742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0224 13:21:41.073987 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:41.074710 932742 addons.go:238] Setting addon default-storageclass=true in "no-preload-037941"
W0224 13:21:41.074731 932742 addons.go:247] addon default-storageclass should already be in state true
I0224 13:21:41.074754 932742 host.go:66] Checking if "no-preload-037941" exists ...
I0224 13:21:41.075178 932742 cli_runner.go:164] Run: docker container inspect no-preload-037941 --format={{.State.Status}}
I0224 13:21:41.082301 932742 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0224 13:21:38.836592 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:40.837324 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:41.085221 932742 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0224 13:21:41.085252 932742 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0224 13:21:41.085319 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:41.148291 932742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/no-preload-037941/id_rsa Username:docker}
I0224 13:21:41.159186 932742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/no-preload-037941/id_rsa Username:docker}
I0224 13:21:41.164431 932742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/no-preload-037941/id_rsa Username:docker}
I0224 13:21:41.165514 932742 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0224 13:21:41.165533 932742 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0224 13:21:41.165604 932742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-037941
I0224 13:21:41.197746 932742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20451-713351/.minikube/machines/no-preload-037941/id_rsa Username:docker}
I0224 13:21:41.254643 932742 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0224 13:21:41.318843 932742 node_ready.go:35] waiting up to 6m0s for node "no-preload-037941" to be "Ready" ...
I0224 13:21:41.409191 932742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0224 13:21:41.463080 932742 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0224 13:21:41.463170 932742 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0224 13:21:41.522356 932742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0224 13:21:41.556403 932742 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0224 13:21:41.556491 932742 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0224 13:21:41.588494 932742 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0224 13:21:41.588594 932742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0224 13:21:41.647513 932742 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0224 13:21:41.647599 932742 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0224 13:21:41.723599 932742 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0224 13:21:41.723690 932742 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
W0224 13:21:41.845128 932742 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0224 13:21:41.845232 932742 retry.go:31] will retry after 209.908374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0224 13:21:41.880460 932742 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0224 13:21:41.880508 932742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0224 13:21:41.965801 932742 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0224 13:21:41.965887 932742 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0224 13:21:42.019551 932742 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0224 13:21:42.019800 932742 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0224 13:21:42.055796 932742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0224 13:21:42.103354 932742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0224 13:21:42.154787 932742 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0224 13:21:42.154946 932742 retry.go:31] will retry after 330.848744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0224 13:21:42.213266 932742 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0224 13:21:42.213370 932742 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0224 13:21:42.364326 932742 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0224 13:21:42.364401 932742 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0224 13:21:42.486856 932742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0224 13:21:42.495217 932742 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0224 13:21:42.495289 932742 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0224 13:21:42.691375 932742 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0224 13:21:42.691450 932742 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0224 13:21:42.816841 932742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0224 13:21:43.337000 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:45.837011 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:47.183344 932742 node_ready.go:49] node "no-preload-037941" has status "Ready":"True"
I0224 13:21:47.183372 932742 node_ready.go:38] duration metric: took 5.86443698s for node "no-preload-037941" to be "Ready" ...
I0224 13:21:47.183383 932742 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 13:21:47.199530 932742 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-x7p66" in "kube-system" namespace to be "Ready" ...
I0224 13:21:47.302116 932742 pod_ready.go:93] pod "coredns-668d6bf9bc-x7p66" in "kube-system" namespace has status "Ready":"True"
I0224 13:21:47.302195 932742 pod_ready.go:82] duration metric: took 102.634537ms for pod "coredns-668d6bf9bc-x7p66" in "kube-system" namespace to be "Ready" ...
I0224 13:21:47.302222 932742 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-037941" in "kube-system" namespace to be "Ready" ...
I0224 13:21:47.377793 932742 pod_ready.go:93] pod "etcd-no-preload-037941" in "kube-system" namespace has status "Ready":"True"
I0224 13:21:47.377866 932742 pod_ready.go:82] duration metric: took 75.62459ms for pod "etcd-no-preload-037941" in "kube-system" namespace to be "Ready" ...
I0224 13:21:47.377895 932742 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-037941" in "kube-system" namespace to be "Ready" ...
I0224 13:21:47.404958 932742 pod_ready.go:93] pod "kube-apiserver-no-preload-037941" in "kube-system" namespace has status "Ready":"True"
I0224 13:21:47.405043 932742 pod_ready.go:82] duration metric: took 27.126441ms for pod "kube-apiserver-no-preload-037941" in "kube-system" namespace to be "Ready" ...
I0224 13:21:47.405093 932742 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-037941" in "kube-system" namespace to be "Ready" ...
I0224 13:21:47.416240 932742 pod_ready.go:93] pod "kube-controller-manager-no-preload-037941" in "kube-system" namespace has status "Ready":"True"
I0224 13:21:47.416312 932742 pod_ready.go:82] duration metric: took 11.193386ms for pod "kube-controller-manager-no-preload-037941" in "kube-system" namespace to be "Ready" ...
I0224 13:21:47.416339 932742 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p6xtb" in "kube-system" namespace to be "Ready" ...
I0224 13:21:47.427637 932742 pod_ready.go:93] pod "kube-proxy-p6xtb" in "kube-system" namespace has status "Ready":"True"
I0224 13:21:47.427712 932742 pod_ready.go:82] duration metric: took 11.352562ms for pod "kube-proxy-p6xtb" in "kube-system" namespace to be "Ready" ...
I0224 13:21:47.427737 932742 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-037941" in "kube-system" namespace to be "Ready" ...
I0224 13:21:49.433032 932742 pod_ready.go:103] pod "kube-scheduler-no-preload-037941" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:50.491681 932742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.435773151s)
I0224 13:21:50.491809 932742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.38834572s)
I0224 13:21:50.491819 932742 addons.go:479] Verifying addon metrics-server=true in "no-preload-037941"
I0224 13:21:50.491851 932742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.004925709s)
I0224 13:21:50.704409 932742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.887474109s)
I0224 13:21:50.707514 932742 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-037941 addons enable metrics-server
I0224 13:21:50.711405 932742 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
I0224 13:21:47.837046 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:50.384479 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:50.714372 932742 addons.go:514] duration metric: took 9.729499434s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
I0224 13:21:51.433758 932742 pod_ready.go:103] pod "kube-scheduler-no-preload-037941" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:52.836991 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:55.335519 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:53.937719 932742 pod_ready.go:103] pod "kube-scheduler-no-preload-037941" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:54.934726 932742 pod_ready.go:93] pod "kube-scheduler-no-preload-037941" in "kube-system" namespace has status "Ready":"True"
I0224 13:21:54.934762 932742 pod_ready.go:82] duration metric: took 7.507001333s for pod "kube-scheduler-no-preload-037941" in "kube-system" namespace to be "Ready" ...
I0224 13:21:54.934781 932742 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace to be "Ready" ...
I0224 13:21:56.942827 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:57.336789 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:21:59.839156 927252 pod_ready.go:103] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:01.336376 927252 pod_ready.go:93] pod "etcd-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"True"
I0224 13:22:01.336412 927252 pod_ready.go:82] duration metric: took 40.506040441s for pod "etcd-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.336431 927252 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.343183 927252 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"True"
I0224 13:22:01.343208 927252 pod_ready.go:82] duration metric: took 6.768835ms for pod "kube-apiserver-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.343225 927252 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.349177 927252 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"True"
I0224 13:22:01.349208 927252 pod_ready.go:82] duration metric: took 5.973237ms for pod "kube-controller-manager-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.349223 927252 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gxpjd" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.357462 927252 pod_ready.go:93] pod "kube-proxy-gxpjd" in "kube-system" namespace has status "Ready":"True"
I0224 13:22:01.357506 927252 pod_ready.go:82] duration metric: took 8.273633ms for pod "kube-proxy-gxpjd" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.357522 927252 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.363894 927252 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-041199" in "kube-system" namespace has status "Ready":"True"
I0224 13:22:01.363928 927252 pod_ready.go:82] duration metric: took 6.387118ms for pod "kube-scheduler-old-k8s-version-041199" in "kube-system" namespace to be "Ready" ...
I0224 13:22:01.363947 927252 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace to be "Ready" ...
I0224 13:21:58.944472 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:01.442099 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:03.369160 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:05.869746 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:03.940595 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:05.941162 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:07.941756 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:08.370036 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:10.370090 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:10.440842 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:12.941195 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:12.869259 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:14.870040 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:15.440765 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:17.440849 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:17.368856 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:19.369667 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:21.869388 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:19.939869 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:21.940369 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:23.870282 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:25.870514 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:23.941154 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:25.941268 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:28.369257 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:30.369720 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:28.440240 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:30.441512 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:32.940559 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:32.869276 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:34.870645 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:34.950223 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:37.440383 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:37.369147 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:39.369527 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:41.370689 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:39.442889 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:41.942911 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:43.869480 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:45.869526 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:44.441920 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:46.940078 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:47.870772 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:50.369084 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:49.441556 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:51.441901 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:52.868934 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:54.869801 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:56.869906 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:53.940408 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:55.942199 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:59.368718 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:01.370114 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:22:58.440909 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:00.444844 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:02.940750 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:03.868583 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:05.869283 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:05.441525 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:07.939558 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:08.368952 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:10.369480 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:09.940773 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:12.440945 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:12.868948 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:14.869955 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:14.939774 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:16.940008 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:17.369024 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:19.869202 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:21.870114 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:18.941775 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:21.440394 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:24.368988 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:26.369648 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:23.440887 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:25.941994 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:28.870121 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:31.369158 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:28.440538 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:30.441420 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:32.939956 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:33.378119 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:35.869876 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:34.940983 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:37.441156 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:38.369849 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:40.870042 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:39.941914 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:42.441425 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:43.369324 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:45.870225 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:44.940193 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:46.941989 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:47.871060 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:50.369017 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:49.440568 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:51.441004 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:52.369152 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:54.372968 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:56.374098 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:53.941341 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:56.440688 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:58.868892 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:00.869797 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:23:58.442715 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:00.443809 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:02.940916 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:03.370080 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:05.869495 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:05.441652 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:07.940361 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:08.369948 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:10.869710 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:09.940416 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:11.940550 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:12.869743 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:15.368318 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:14.442035 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:16.940942 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:17.368770 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:19.869523 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:21.871108 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:19.440989 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:21.939962 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:24.369568 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:26.869203 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:23.940034 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:25.944192 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:29.368644 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:31.369383 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:28.448388 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:30.940536 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:33.870123 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:36.368830 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:33.441085 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:35.444279 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:37.940237 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:38.369278 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:40.869261 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:40.441016 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:42.940134 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:42.869419 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:44.870084 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:44.940588 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:46.940934 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:47.369221 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:49.371098 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:51.870677 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:49.441848 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:51.940225 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:53.872634 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:56.370227 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:53.940289 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:56.440947 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:58.870140 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:01.374446 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:24:58.940196 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:00.940472 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:02.941092 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:03.870621 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:06.370145 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:05.442264 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:07.443635 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:08.869312 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:10.869392 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:09.940609 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:12.440353 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:13.369247 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:15.369393 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:14.939906 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:16.940114 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:17.869827 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:19.870381 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:21.870670 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:18.940727 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:21.441777 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:24.368689 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:26.369755 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:23.940181 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:25.941449 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:28.425070 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:30.869733 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:28.440209 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:30.441081 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:32.941281 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:32.870199 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:34.870900 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:35.443642 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:37.939855 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:37.369060 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:39.369731 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:41.871019 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:39.940014 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:41.941310 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:43.872810 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:46.369337 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:44.441060 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:46.442836 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:48.369412 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:50.369572 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:48.940378 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:50.940673 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:52.942593 932742 pod_ready.go:103] pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:52.869501 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:54.870077 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:56.870133 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:25:54.941186 932742 pod_ready.go:82] duration metric: took 4m0.006392197s for pod "metrics-server-f79f97bbb-8d9mq" in "kube-system" namespace to be "Ready" ...
E0224 13:25:54.941212 932742 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0224 13:25:54.941221 932742 pod_ready.go:39] duration metric: took 4m7.757827528s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 13:25:54.941236 932742 api_server.go:52] waiting for apiserver process to appear ...
I0224 13:25:54.941279 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0224 13:25:54.941350 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0224 13:25:55.002622 932742 cri.go:89] found id: "70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661"
I0224 13:25:55.002648 932742 cri.go:89] found id: "c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e"
I0224 13:25:55.002654 932742 cri.go:89] found id: ""
I0224 13:25:55.002662 932742 logs.go:282] 2 containers: [70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661 c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e]
I0224 13:25:55.002735 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.007922 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.012989 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0224 13:25:55.013092 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0224 13:25:55.053900 932742 cri.go:89] found id: "3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d"
I0224 13:25:55.053923 932742 cri.go:89] found id: "99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499"
I0224 13:25:55.053934 932742 cri.go:89] found id: ""
I0224 13:25:55.053943 932742 logs.go:282] 2 containers: [3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d 99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499]
I0224 13:25:55.054000 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.058135 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.061899 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0224 13:25:55.061981 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0224 13:25:55.102198 932742 cri.go:89] found id: "a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b"
I0224 13:25:55.102218 932742 cri.go:89] found id: "b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4"
I0224 13:25:55.102223 932742 cri.go:89] found id: ""
I0224 13:25:55.102230 932742 logs.go:282] 2 containers: [a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4]
I0224 13:25:55.102291 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.106317 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.110875 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0224 13:25:55.110948 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0224 13:25:55.151043 932742 cri.go:89] found id: "b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a"
I0224 13:25:55.151132 932742 cri.go:89] found id: "d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba"
I0224 13:25:55.151150 932742 cri.go:89] found id: ""
I0224 13:25:55.151169 932742 logs.go:282] 2 containers: [b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba]
I0224 13:25:55.151285 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.155359 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.158997 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0224 13:25:55.159080 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0224 13:25:55.205519 932742 cri.go:89] found id: "30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1"
I0224 13:25:55.205558 932742 cri.go:89] found id: "91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a"
I0224 13:25:55.205564 932742 cri.go:89] found id: ""
I0224 13:25:55.205571 932742 logs.go:282] 2 containers: [30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1 91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a]
I0224 13:25:55.205669 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.210964 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.214629 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0224 13:25:55.214729 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0224 13:25:55.278865 932742 cri.go:89] found id: "f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4"
I0224 13:25:55.278887 932742 cri.go:89] found id: "57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b"
I0224 13:25:55.278892 932742 cri.go:89] found id: ""
I0224 13:25:55.278900 932742 logs.go:282] 2 containers: [f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4 57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b]
I0224 13:25:55.278969 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.283739 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.291430 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0224 13:25:55.291543 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0224 13:25:55.350682 932742 cri.go:89] found id: "fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb"
I0224 13:25:55.350705 932742 cri.go:89] found id: "0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b"
I0224 13:25:55.350711 932742 cri.go:89] found id: ""
I0224 13:25:55.350719 932742 logs.go:282] 2 containers: [fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb 0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b]
I0224 13:25:55.350776 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.354835 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.358941 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0224 13:25:55.359045 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0224 13:25:55.409006 932742 cri.go:89] found id: "ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79"
I0224 13:25:55.409030 932742 cri.go:89] found id: "1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5"
I0224 13:25:55.409035 932742 cri.go:89] found id: ""
I0224 13:25:55.409043 932742 logs.go:282] 2 containers: [ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79 1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5]
I0224 13:25:55.409121 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.413110 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.417430 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0224 13:25:55.417526 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0224 13:25:55.460002 932742 cri.go:89] found id: "a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b"
I0224 13:25:55.460025 932742 cri.go:89] found id: ""
I0224 13:25:55.460033 932742 logs.go:282] 1 containers: [a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b]
I0224 13:25:55.460124 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:55.463738 932742 logs.go:123] Gathering logs for storage-provisioner [1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5] ...
I0224 13:25:55.463766 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5"
I0224 13:25:55.511402 932742 logs.go:123] Gathering logs for container status ...
I0224 13:25:55.511432 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 13:25:55.599964 932742 logs.go:123] Gathering logs for kube-apiserver [70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661] ...
I0224 13:25:55.599996 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661"
I0224 13:25:55.665052 932742 logs.go:123] Gathering logs for coredns [b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4] ...
I0224 13:25:55.665087 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4"
I0224 13:25:55.707078 932742 logs.go:123] Gathering logs for kube-proxy [30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1] ...
I0224 13:25:55.707120 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1"
I0224 13:25:55.749825 932742 logs.go:123] Gathering logs for kube-proxy [91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a] ...
I0224 13:25:55.749855 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a"
I0224 13:25:55.792301 932742 logs.go:123] Gathering logs for kube-controller-manager [f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4] ...
I0224 13:25:55.792331 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4"
I0224 13:25:55.862208 932742 logs.go:123] Gathering logs for kindnet [0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b] ...
I0224 13:25:55.862243 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b"
I0224 13:25:55.907322 932742 logs.go:123] Gathering logs for containerd ...
I0224 13:25:55.907350 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0224 13:25:55.971758 932742 logs.go:123] Gathering logs for etcd [3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d] ...
I0224 13:25:55.971793 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d"
I0224 13:25:56.030517 932742 logs.go:123] Gathering logs for kube-scheduler [b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a] ...
I0224 13:25:56.030553 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a"
I0224 13:25:56.075194 932742 logs.go:123] Gathering logs for kube-apiserver [c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e] ...
I0224 13:25:56.075225 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e"
I0224 13:25:56.143072 932742 logs.go:123] Gathering logs for etcd [99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499] ...
I0224 13:25:56.143111 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499"
I0224 13:25:56.190354 932742 logs.go:123] Gathering logs for kube-controller-manager [57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b] ...
I0224 13:25:56.190385 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b"
I0224 13:25:56.253394 932742 logs.go:123] Gathering logs for storage-provisioner [ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79] ...
I0224 13:25:56.253434 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79"
I0224 13:25:56.291600 932742 logs.go:123] Gathering logs for kubelet ...
I0224 13:25:56.291629 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0224 13:25:56.381824 932742 logs.go:123] Gathering logs for dmesg ...
I0224 13:25:56.381862 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0224 13:25:56.401732 932742 logs.go:123] Gathering logs for kube-scheduler [d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba] ...
I0224 13:25:56.401763 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba"
I0224 13:25:56.472166 932742 logs.go:123] Gathering logs for kindnet [fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb] ...
I0224 13:25:56.472202 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb"
I0224 13:25:56.516729 932742 logs.go:123] Gathering logs for kubernetes-dashboard [a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b] ...
I0224 13:25:56.516756 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b"
I0224 13:25:56.562177 932742 logs.go:123] Gathering logs for describe nodes ...
I0224 13:25:56.562206 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0224 13:25:56.707648 932742 logs.go:123] Gathering logs for coredns [a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b] ...
I0224 13:25:56.707680 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b"
I0224 13:25:59.372678 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:26:01.374544 927252 pod_ready.go:103] pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace has status "Ready":"False"
I0224 13:26:01.374583 927252 pod_ready.go:82] duration metric: took 4m0.010627786s for pod "metrics-server-9975d5f86-4hkkq" in "kube-system" namespace to be "Ready" ...
E0224 13:26:01.374597 927252 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0224 13:26:01.374606 927252 pod_ready.go:39] duration metric: took 5m25.269530815s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0224 13:26:01.374625 927252 api_server.go:52] waiting for apiserver process to appear ...
I0224 13:26:01.374679 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0224 13:26:01.374755 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0224 13:26:01.456344 927252 cri.go:89] found id: "43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a"
I0224 13:26:01.456369 927252 cri.go:89] found id: "a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa"
I0224 13:26:01.456374 927252 cri.go:89] found id: ""
I0224 13:26:01.456382 927252 logs.go:282] 2 containers: [43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa]
I0224 13:26:01.456444 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.460731 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.464631 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0224 13:26:01.464708 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0224 13:26:01.515782 927252 cri.go:89] found id: "ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf"
I0224 13:26:01.515811 927252 cri.go:89] found id: "f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8"
I0224 13:26:01.515817 927252 cri.go:89] found id: ""
I0224 13:26:01.515826 927252 logs.go:282] 2 containers: [ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8]
I0224 13:26:01.515906 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.520410 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.526491 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0224 13:26:01.526730 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0224 13:26:01.573251 927252 cri.go:89] found id: "911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c"
I0224 13:26:01.573276 927252 cri.go:89] found id: "9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5"
I0224 13:26:01.573281 927252 cri.go:89] found id: ""
I0224 13:26:01.573290 927252 logs.go:282] 2 containers: [911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c 9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5]
I0224 13:26:01.573383 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.577833 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.581941 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0224 13:26:01.582076 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0224 13:26:01.631487 927252 cri.go:89] found id: "e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd"
I0224 13:26:01.631509 927252 cri.go:89] found id: "46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73"
I0224 13:26:01.631514 927252 cri.go:89] found id: ""
I0224 13:26:01.631522 927252 logs.go:282] 2 containers: [e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd 46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73]
I0224 13:26:01.631612 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.635833 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.641445 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0224 13:26:01.641684 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0224 13:26:01.684531 927252 cri.go:89] found id: "d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01"
I0224 13:26:01.684578 927252 cri.go:89] found id: "bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3"
I0224 13:26:01.684586 927252 cri.go:89] found id: ""
I0224 13:26:01.684594 927252 logs.go:282] 2 containers: [d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01 bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3]
I0224 13:26:01.684663 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.689913 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.695230 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0224 13:26:01.695367 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0224 13:26:01.743079 927252 cri.go:89] found id: "25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9"
I0224 13:26:01.743259 927252 cri.go:89] found id: "f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d"
I0224 13:26:01.743281 927252 cri.go:89] found id: ""
I0224 13:26:01.743303 927252 logs.go:282] 2 containers: [25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9 f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d]
I0224 13:26:01.743503 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.748515 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.754139 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0224 13:26:01.754288 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0224 13:26:01.807332 927252 cri.go:89] found id: "9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597"
I0224 13:26:01.807415 927252 cri.go:89] found id: "1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7"
I0224 13:26:01.807427 927252 cri.go:89] found id: ""
I0224 13:26:01.807435 927252 logs.go:282] 2 containers: [9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597 1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7]
I0224 13:26:01.807516 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.811985 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.816520 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0224 13:26:01.816635 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0224 13:26:01.864404 927252 cri.go:89] found id: "061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419"
I0224 13:26:01.864429 927252 cri.go:89] found id: ""
I0224 13:26:01.864438 927252 logs.go:282] 1 containers: [061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419]
I0224 13:26:01.864536 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.868937 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0224 13:26:01.869053 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0224 13:26:01.915179 927252 cri.go:89] found id: "3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d"
I0224 13:26:01.915203 927252 cri.go:89] found id: "6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992"
I0224 13:26:01.915209 927252 cri.go:89] found id: ""
I0224 13:26:01.915219 927252 logs.go:282] 2 containers: [3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d 6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992]
I0224 13:26:01.915278 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:01.919153 927252 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.250777 932742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 13:25:59.264944 932742 api_server.go:72] duration metric: took 4m18.280411703s to wait for apiserver process to appear ...
I0224 13:25:59.264966 932742 api_server.go:88] waiting for apiserver healthz status ...
I0224 13:25:59.265000 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0224 13:25:59.265052 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0224 13:25:59.307495 932742 cri.go:89] found id: "70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661"
I0224 13:25:59.307515 932742 cri.go:89] found id: "c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e"
I0224 13:25:59.307520 932742 cri.go:89] found id: ""
I0224 13:25:59.307527 932742 logs.go:282] 2 containers: [70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661 c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e]
I0224 13:25:59.307582 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.311823 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.315886 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0224 13:25:59.315954 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0224 13:25:59.357799 932742 cri.go:89] found id: "3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d"
I0224 13:25:59.357864 932742 cri.go:89] found id: "99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499"
I0224 13:25:59.357883 932742 cri.go:89] found id: ""
I0224 13:25:59.357912 932742 logs.go:282] 2 containers: [3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d 99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499]
I0224 13:25:59.357982 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.362025 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.366112 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0224 13:25:59.366233 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0224 13:25:59.407697 932742 cri.go:89] found id: "a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b"
I0224 13:25:59.407760 932742 cri.go:89] found id: "b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4"
I0224 13:25:59.407791 932742 cri.go:89] found id: ""
I0224 13:25:59.407863 932742 logs.go:282] 2 containers: [a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4]
I0224 13:25:59.407936 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.411830 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.415442 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0224 13:25:59.415535 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0224 13:25:59.466652 932742 cri.go:89] found id: "b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a"
I0224 13:25:59.466717 932742 cri.go:89] found id: "d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba"
I0224 13:25:59.466736 932742 cri.go:89] found id: ""
I0224 13:25:59.466756 932742 logs.go:282] 2 containers: [b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba]
I0224 13:25:59.466828 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.471135 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.474705 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0224 13:25:59.474806 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0224 13:25:59.510974 932742 cri.go:89] found id: "30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1"
I0224 13:25:59.510997 932742 cri.go:89] found id: "91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a"
I0224 13:25:59.511002 932742 cri.go:89] found id: ""
I0224 13:25:59.511010 932742 logs.go:282] 2 containers: [30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1 91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a]
I0224 13:25:59.511068 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.514907 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.519401 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0224 13:25:59.519522 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0224 13:25:59.557493 932742 cri.go:89] found id: "f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4"
I0224 13:25:59.557563 932742 cri.go:89] found id: "57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b"
I0224 13:25:59.557579 932742 cri.go:89] found id: ""
I0224 13:25:59.557659 932742 logs.go:282] 2 containers: [f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4 57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b]
I0224 13:25:59.557719 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.561423 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.565115 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0224 13:25:59.565234 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0224 13:25:59.602993 932742 cri.go:89] found id: "fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb"
I0224 13:25:59.603019 932742 cri.go:89] found id: "0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b"
I0224 13:25:59.603024 932742 cri.go:89] found id: ""
I0224 13:25:59.603031 932742 logs.go:282] 2 containers: [fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb 0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b]
I0224 13:25:59.603094 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.607412 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.610866 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0224 13:25:59.610944 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0224 13:25:59.648527 932742 cri.go:89] found id: "a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b"
I0224 13:25:59.648552 932742 cri.go:89] found id: ""
I0224 13:25:59.648561 932742 logs.go:282] 1 containers: [a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b]
I0224 13:25:59.648620 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.652571 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0224 13:25:59.652641 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0224 13:25:59.690852 932742 cri.go:89] found id: "ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79"
I0224 13:25:59.690879 932742 cri.go:89] found id: "1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5"
I0224 13:25:59.690884 932742 cri.go:89] found id: ""
I0224 13:25:59.690891 932742 logs.go:282] 2 containers: [ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79 1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5]
I0224 13:25:59.690952 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.694911 932742 ssh_runner.go:195] Run: which crictl
I0224 13:25:59.698338 932742 logs.go:123] Gathering logs for etcd [99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499] ...
I0224 13:25:59.698363 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499"
I0224 13:25:59.750072 932742 logs.go:123] Gathering logs for kube-controller-manager [57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b] ...
I0224 13:25:59.750143 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b"
I0224 13:25:59.808757 932742 logs.go:123] Gathering logs for kubernetes-dashboard [a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b] ...
I0224 13:25:59.808790 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b"
I0224 13:25:59.849343 932742 logs.go:123] Gathering logs for storage-provisioner [ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79] ...
I0224 13:25:59.849375 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79"
I0224 13:25:59.902665 932742 logs.go:123] Gathering logs for kubelet ...
I0224 13:25:59.902693 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0224 13:25:59.990581 932742 logs.go:123] Gathering logs for kube-apiserver [70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661] ...
I0224 13:25:59.990619 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661"
I0224 13:26:00.129236 932742 logs.go:123] Gathering logs for etcd [3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d] ...
I0224 13:26:00.129280 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d"
I0224 13:26:00.213990 932742 logs.go:123] Gathering logs for kube-proxy [30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1] ...
I0224 13:26:00.214033 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1"
I0224 13:26:00.282510 932742 logs.go:123] Gathering logs for kube-proxy [91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a] ...
I0224 13:26:00.282540 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a"
I0224 13:26:00.335327 932742 logs.go:123] Gathering logs for kindnet [0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b] ...
I0224 13:26:00.335359 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b"
I0224 13:26:00.387812 932742 logs.go:123] Gathering logs for kindnet [fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb] ...
I0224 13:26:00.387841 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb"
I0224 13:26:00.451255 932742 logs.go:123] Gathering logs for container status ...
I0224 13:26:00.451289 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 13:26:00.507882 932742 logs.go:123] Gathering logs for dmesg ...
I0224 13:26:00.507911 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0224 13:26:00.532221 932742 logs.go:123] Gathering logs for kube-apiserver [c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e] ...
I0224 13:26:00.532250 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e"
I0224 13:26:00.585139 932742 logs.go:123] Gathering logs for coredns [b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4] ...
I0224 13:26:00.585174 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4"
I0224 13:26:00.641306 932742 logs.go:123] Gathering logs for kube-scheduler [b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a] ...
I0224 13:26:00.641388 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a"
I0224 13:26:00.679742 932742 logs.go:123] Gathering logs for kube-scheduler [d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba] ...
I0224 13:26:00.679820 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba"
I0224 13:26:00.731594 932742 logs.go:123] Gathering logs for kube-controller-manager [f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4] ...
I0224 13:26:00.731629 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4"
I0224 13:26:00.806141 932742 logs.go:123] Gathering logs for describe nodes ...
I0224 13:26:00.806181 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0224 13:26:00.928866 932742 logs.go:123] Gathering logs for coredns [a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b] ...
I0224 13:26:00.928901 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b"
I0224 13:26:00.977142 932742 logs.go:123] Gathering logs for storage-provisioner [1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5] ...
I0224 13:26:00.977178 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5"
I0224 13:26:01.016837 932742 logs.go:123] Gathering logs for containerd ...
I0224 13:26:01.016866 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0224 13:26:01.922828 927252 logs.go:123] Gathering logs for kube-controller-manager [25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9] ...
I0224 13:26:01.922853 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9"
I0224 13:26:02.000213 927252 logs.go:123] Gathering logs for kube-controller-manager [f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d] ...
I0224 13:26:02.000253 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d"
I0224 13:26:02.071302 927252 logs.go:123] Gathering logs for storage-provisioner [6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992] ...
I0224 13:26:02.071346 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992"
I0224 13:26:02.113257 927252 logs.go:123] Gathering logs for kube-apiserver [a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa] ...
I0224 13:26:02.113286 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa"
I0224 13:26:02.185372 927252 logs.go:123] Gathering logs for etcd [ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf] ...
I0224 13:26:02.185407 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf"
I0224 13:26:02.230470 927252 logs.go:123] Gathering logs for etcd [f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8] ...
I0224 13:26:02.230502 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8"
I0224 13:26:02.275593 927252 logs.go:123] Gathering logs for container status ...
I0224 13:26:02.275623 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 13:26:02.331810 927252 logs.go:123] Gathering logs for kube-apiserver [43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a] ...
I0224 13:26:02.331841 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a"
I0224 13:26:02.410814 927252 logs.go:123] Gathering logs for coredns [911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c] ...
I0224 13:26:02.410849 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c"
I0224 13:26:02.460154 927252 logs.go:123] Gathering logs for kube-scheduler [46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73] ...
I0224 13:26:02.460181 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73"
I0224 13:26:02.503412 927252 logs.go:123] Gathering logs for storage-provisioner [3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d] ...
I0224 13:26:02.503441 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d"
I0224 13:26:02.548329 927252 logs.go:123] Gathering logs for containerd ...
I0224 13:26:02.548358 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0224 13:26:02.609571 927252 logs.go:123] Gathering logs for dmesg ...
I0224 13:26:02.609615 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0224 13:26:02.626935 927252 logs.go:123] Gathering logs for coredns [9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5] ...
I0224 13:26:02.626964 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5"
I0224 13:26:02.672895 927252 logs.go:123] Gathering logs for kubernetes-dashboard [061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419] ...
I0224 13:26:02.672924 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419"
I0224 13:26:02.711480 927252 logs.go:123] Gathering logs for kube-proxy [d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01] ...
I0224 13:26:02.711510 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01"
I0224 13:26:02.750943 927252 logs.go:123] Gathering logs for kube-proxy [bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3] ...
I0224 13:26:02.750972 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3"
I0224 13:26:02.789084 927252 logs.go:123] Gathering logs for kindnet [9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597] ...
I0224 13:26:02.789119 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597"
I0224 13:26:02.843974 927252 logs.go:123] Gathering logs for kindnet [1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7] ...
I0224 13:26:02.844003 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7"
I0224 13:26:02.887921 927252 logs.go:123] Gathering logs for kubelet ...
I0224 13:26:02.887952 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0224 13:26:02.947275 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:39 old-k8s-version-041199 kubelet[667]: E0224 13:20:39.112850 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:02.947507 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:39 old-k8s-version-041199 kubelet[667]: E0224 13:20:39.658396 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.950434 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:50 old-k8s-version-041199 kubelet[667]: E0224 13:20:50.259925 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:02.952740 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:01 old-k8s-version-041199 kubelet[667]: E0224 13:21:01.758905 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.953084 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:02 old-k8s-version-041199 kubelet[667]: E0224 13:21:02.770204 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.953639 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:04 old-k8s-version-041199 kubelet[667]: E0224 13:21:04.241853 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.954082 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:09 old-k8s-version-041199 kubelet[667]: E0224 13:21:09.799481 667 pod_workers.go:191] Error syncing pod 6a90578d-b6eb-41b6-8f00-06711366057b ("storage-provisioner_kube-system(6a90578d-b6eb-41b6-8f00-06711366057b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6a90578d-b6eb-41b6-8f00-06711366057b)"
W0224 13:26:02.954415 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:10 old-k8s-version-041199 kubelet[667]: E0224 13:21:10.834981 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.957258 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:16 old-k8s-version-041199 kubelet[667]: E0224 13:21:16.250896 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:02.957999 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:25 old-k8s-version-041199 kubelet[667]: E0224 13:21:25.847989 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.958327 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:30 old-k8s-version-041199 kubelet[667]: E0224 13:21:30.835028 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.958532 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:31 old-k8s-version-041199 kubelet[667]: E0224 13:21:31.242247 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.958720 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:43 old-k8s-version-041199 kubelet[667]: E0224 13:21:43.241958 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.959309 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:46 old-k8s-version-041199 kubelet[667]: E0224 13:21:46.925238 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.959637 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:50 old-k8s-version-041199 kubelet[667]: E0224 13:21:50.836033 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.959820 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:56 old-k8s-version-041199 kubelet[667]: E0224 13:21:56.241820 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.960155 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:03 old-k8s-version-041199 kubelet[667]: E0224 13:22:03.245426 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.962600 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:10 old-k8s-version-041199 kubelet[667]: E0224 13:22:10.250794 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:02.962927 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:15 old-k8s-version-041199 kubelet[667]: E0224 13:22:15.241830 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.963112 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:21 old-k8s-version-041199 kubelet[667]: E0224 13:22:21.243021 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.963440 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:26 old-k8s-version-041199 kubelet[667]: E0224 13:22:26.241985 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.963625 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:35 old-k8s-version-041199 kubelet[667]: E0224 13:22:35.242040 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.964213 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:41 old-k8s-version-041199 kubelet[667]: E0224 13:22:41.077125 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.964398 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:50 old-k8s-version-041199 kubelet[667]: E0224 13:22:50.241885 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.964726 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:50 old-k8s-version-041199 kubelet[667]: E0224 13:22:50.835068 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.964911 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:01 old-k8s-version-041199 kubelet[667]: E0224 13:23:01.245324 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.965236 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:02 old-k8s-version-041199 kubelet[667]: E0224 13:23:02.241145 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.965564 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:14 old-k8s-version-041199 kubelet[667]: E0224 13:23:14.241413 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.965756 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:15 old-k8s-version-041199 kubelet[667]: E0224 13:23:15.242710 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.966085 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:28 old-k8s-version-041199 kubelet[667]: E0224 13:23:28.243135 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.966267 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:30 old-k8s-version-041199 kubelet[667]: E0224 13:23:30.241731 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.966615 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:40 old-k8s-version-041199 kubelet[667]: E0224 13:23:40.241083 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.969042 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:43 old-k8s-version-041199 kubelet[667]: E0224 13:23:43.253359 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:02.969409 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:54 old-k8s-version-041199 kubelet[667]: E0224 13:23:54.246604 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.969644 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:58 old-k8s-version-041199 kubelet[667]: E0224 13:23:58.241520 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.969961 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:09 old-k8s-version-041199 kubelet[667]: E0224 13:24:09.241847 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.970419 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:09 old-k8s-version-041199 kubelet[667]: E0224 13:24:09.331557 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.970746 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:10 old-k8s-version-041199 kubelet[667]: E0224 13:24:10.835264 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.970928 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:23 old-k8s-version-041199 kubelet[667]: E0224 13:24:23.242791 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.971253 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:25 old-k8s-version-041199 kubelet[667]: E0224 13:24:25.241309 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.971438 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:35 old-k8s-version-041199 kubelet[667]: E0224 13:24:35.243051 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.971770 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:40 old-k8s-version-041199 kubelet[667]: E0224 13:24:40.241570 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.971960 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:49 old-k8s-version-041199 kubelet[667]: E0224 13:24:49.241522 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.972284 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:54 old-k8s-version-041199 kubelet[667]: E0224 13:24:54.241230 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.972468 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:00 old-k8s-version-041199 kubelet[667]: E0224 13:25:00.248099 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.972793 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:05 old-k8s-version-041199 kubelet[667]: E0224 13:25:05.247533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.972976 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:15 old-k8s-version-041199 kubelet[667]: E0224 13:25:15.242757 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.973304 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:18 old-k8s-version-041199 kubelet[667]: E0224 13:25:18.241668 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.973487 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:29 old-k8s-version-041199 kubelet[667]: E0224 13:25:29.245663 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.973821 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:32 old-k8s-version-041199 kubelet[667]: E0224 13:25:32.241514 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.974147 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.241131 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.974331 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.242341 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:02.974656 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:55 old-k8s-version-041199 kubelet[667]: E0224 13:25:55.244533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:02.974839 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:59 old-k8s-version-041199 kubelet[667]: E0224 13:25:59.241703 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0224 13:26:02.974850 927252 logs.go:123] Gathering logs for describe nodes ...
I0224 13:26:02.974865 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0224 13:26:03.140335 927252 logs.go:123] Gathering logs for kube-scheduler [e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd] ...
I0224 13:26:03.140365 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd"
I0224 13:26:03.182100 927252 out.go:358] Setting ErrFile to fd 2...
I0224 13:26:03.182131 927252 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0224 13:26:03.182203 927252 out.go:270] X Problems detected in kubelet:
W0224 13:26:03.182220 927252 out.go:270] Feb 24 13:25:32 old-k8s-version-041199 kubelet[667]: E0224 13:25:32.241514 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:03.182231 927252 out.go:270] Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.241131 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:03.182239 927252 out.go:270] Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.242341 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:03.182245 927252 out.go:270] Feb 24 13:25:55 old-k8s-version-041199 kubelet[667]: E0224 13:25:55.244533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:03.182251 927252 out.go:270] Feb 24 13:25:59 old-k8s-version-041199 kubelet[667]: E0224 13:25:59.241703 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0224 13:26:03.182442 927252 out.go:358] Setting ErrFile to fd 2...
I0224 13:26:03.182459 927252 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 13:26:03.579854 932742 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0224 13:26:03.588071 932742 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0224 13:26:03.589147 932742 api_server.go:141] control plane version: v1.32.2
I0224 13:26:03.589179 932742 api_server.go:131] duration metric: took 4.324205586s to wait for apiserver health ...
I0224 13:26:03.589188 932742 system_pods.go:43] waiting for kube-system pods to appear ...
I0224 13:26:03.589214 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0224 13:26:03.589276 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0224 13:26:03.631196 932742 cri.go:89] found id: "70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661"
I0224 13:26:03.631222 932742 cri.go:89] found id: "c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e"
I0224 13:26:03.631227 932742 cri.go:89] found id: ""
I0224 13:26:03.631236 932742 logs.go:282] 2 containers: [70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661 c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e]
I0224 13:26:03.631292 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.635098 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.638682 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0224 13:26:03.638763 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0224 13:26:03.687131 932742 cri.go:89] found id: "3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d"
I0224 13:26:03.687166 932742 cri.go:89] found id: "99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499"
I0224 13:26:03.687171 932742 cri.go:89] found id: ""
I0224 13:26:03.687180 932742 logs.go:282] 2 containers: [3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d 99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499]
I0224 13:26:03.687248 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.691056 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.694554 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0224 13:26:03.694624 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0224 13:26:03.739244 932742 cri.go:89] found id: "a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b"
I0224 13:26:03.739266 932742 cri.go:89] found id: "b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4"
I0224 13:26:03.739272 932742 cri.go:89] found id: ""
I0224 13:26:03.739279 932742 logs.go:282] 2 containers: [a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4]
I0224 13:26:03.739339 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.742977 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.747459 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0224 13:26:03.747534 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0224 13:26:03.789467 932742 cri.go:89] found id: "b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a"
I0224 13:26:03.789494 932742 cri.go:89] found id: "d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba"
I0224 13:26:03.789500 932742 cri.go:89] found id: ""
I0224 13:26:03.789507 932742 logs.go:282] 2 containers: [b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba]
I0224 13:26:03.789568 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.793630 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.797239 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0224 13:26:03.797315 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0224 13:26:03.841370 932742 cri.go:89] found id: "30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1"
I0224 13:26:03.841393 932742 cri.go:89] found id: "91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a"
I0224 13:26:03.841398 932742 cri.go:89] found id: ""
I0224 13:26:03.841406 932742 logs.go:282] 2 containers: [30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1 91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a]
I0224 13:26:03.841467 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.845318 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.849004 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0224 13:26:03.849081 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0224 13:26:03.886477 932742 cri.go:89] found id: "f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4"
I0224 13:26:03.886500 932742 cri.go:89] found id: "57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b"
I0224 13:26:03.886505 932742 cri.go:89] found id: ""
I0224 13:26:03.886513 932742 logs.go:282] 2 containers: [f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4 57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b]
I0224 13:26:03.886570 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.890410 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.894019 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0224 13:26:03.894160 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0224 13:26:03.939370 932742 cri.go:89] found id: "fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb"
I0224 13:26:03.939444 932742 cri.go:89] found id: "0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b"
I0224 13:26:03.939457 932742 cri.go:89] found id: ""
I0224 13:26:03.939466 932742 logs.go:282] 2 containers: [fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb 0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b]
I0224 13:26:03.939538 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.944497 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:03.948447 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0224 13:26:03.948523 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0224 13:26:04.004665 932742 cri.go:89] found id: "a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b"
I0224 13:26:04.004694 932742 cri.go:89] found id: ""
I0224 13:26:04.004703 932742 logs.go:282] 1 containers: [a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b]
I0224 13:26:04.004770 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:04.009318 932742 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0224 13:26:04.009420 932742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0224 13:26:04.049477 932742 cri.go:89] found id: "ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79"
I0224 13:26:04.049502 932742 cri.go:89] found id: "1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5"
I0224 13:26:04.049508 932742 cri.go:89] found id: ""
I0224 13:26:04.049516 932742 logs.go:282] 2 containers: [ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79 1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5]
I0224 13:26:04.049574 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:04.053249 932742 ssh_runner.go:195] Run: which crictl
I0224 13:26:04.057022 932742 logs.go:123] Gathering logs for dmesg ...
I0224 13:26:04.057049 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0224 13:26:04.074573 932742 logs.go:123] Gathering logs for kube-apiserver [70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661] ...
I0224 13:26:04.074599 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70b427fce61a2580798e9ac0eb8f4fe2de774310f11f5ca6998e890f91a26661"
I0224 13:26:04.134357 932742 logs.go:123] Gathering logs for etcd [3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d] ...
I0224 13:26:04.134389 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8f1773c54597a7102d566452c05ac5b63f296cc62ed9114cf7366a5568903d"
I0224 13:26:04.179790 932742 logs.go:123] Gathering logs for kube-controller-manager [f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4] ...
I0224 13:26:04.179821 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8234e8b9f496906010f1692f69e0f26ce340a73a95dc502390c657e7f1b69f4"
I0224 13:26:04.248520 932742 logs.go:123] Gathering logs for kindnet [fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb] ...
I0224 13:26:04.248552 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdeb1a61a146999283512cb8cfbd4f7c50b6bb93327510eee4f03e6bfa5483eb"
I0224 13:26:04.290719 932742 logs.go:123] Gathering logs for storage-provisioner [1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5] ...
I0224 13:26:04.290750 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1de5415990fa00b22adca41801e51949f46c43c83e700af56a6378f0ade8d9e5"
I0224 13:26:04.338030 932742 logs.go:123] Gathering logs for kubelet ...
I0224 13:26:04.338059 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0224 13:26:04.420454 932742 logs.go:123] Gathering logs for coredns [b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4] ...
I0224 13:26:04.420489 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b924910d51da5b8484e1841c7c12a96b9f1df33c1955d9dafb15cabc851c5dd4"
I0224 13:26:04.469368 932742 logs.go:123] Gathering logs for kube-scheduler [b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a] ...
I0224 13:26:04.469398 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2e93938ec1862b59118e99ffd91b7be5d751d83fd7a6b52fa7c0b2dd8a57b7a"
I0224 13:26:04.516180 932742 logs.go:123] Gathering logs for kindnet [0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b] ...
I0224 13:26:04.516209 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cc00fd73bb25b7087cbe5b1ea99253ae6199ba1906cec74275af2f9f81cb98b"
I0224 13:26:04.562524 932742 logs.go:123] Gathering logs for storage-provisioner [ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79] ...
I0224 13:26:04.562549 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba3d1a478be0ca9bed0a9733510c21cceba11df3ec1a2424de252837bebb0e79"
I0224 13:26:04.601026 932742 logs.go:123] Gathering logs for describe nodes ...
I0224 13:26:04.601052 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0224 13:26:04.724814 932742 logs.go:123] Gathering logs for etcd [99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499] ...
I0224 13:26:04.724847 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99c90bea312cf31a010f6b4fadb6c65fe8c4907c2b61ff939fd3a04109fee499"
I0224 13:26:04.771784 932742 logs.go:123] Gathering logs for coredns [a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b] ...
I0224 13:26:04.771821 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0a6a254b1d27dd7d29bc3e22af4d974dfc5e36e1cdc8496d772ab18104c732b"
I0224 13:26:04.821852 932742 logs.go:123] Gathering logs for kube-proxy [30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1] ...
I0224 13:26:04.821883 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30127f0eebbfd10703c73c75d20cdd7ed7516e01ebbf6609a9e8664776a13ee1"
I0224 13:26:04.862932 932742 logs.go:123] Gathering logs for kube-proxy [91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a] ...
I0224 13:26:04.862958 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91e0250dbda67726cdf52afa74c6289e9390914a2222303cccfb018cdf46138a"
I0224 13:26:04.901896 932742 logs.go:123] Gathering logs for container status ...
I0224 13:26:04.901923 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 13:26:04.946852 932742 logs.go:123] Gathering logs for kube-apiserver [c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e] ...
I0224 13:26:04.946881 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ed160b605a2db044bbce4037e1b02615177b75059742b4ff25db5b209997e"
I0224 13:26:04.994025 932742 logs.go:123] Gathering logs for kube-controller-manager [57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b] ...
I0224 13:26:04.994057 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57644e6957f454f9ad5b6a0fe1ba081a4ff2b185a3f945a8bb9fb1a1de62d98b"
I0224 13:26:05.062183 932742 logs.go:123] Gathering logs for kubernetes-dashboard [a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b] ...
I0224 13:26:05.062220 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c9a1f94524ea487461819b8a25c3189c720fef50424ee8d9db38c38de8cd8b"
I0224 13:26:05.105439 932742 logs.go:123] Gathering logs for containerd ...
I0224 13:26:05.105469 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0224 13:26:05.161890 932742 logs.go:123] Gathering logs for kube-scheduler [d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba] ...
I0224 13:26:05.161923 932742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d945e8f8025e50043ba8e36a126934fabd1e9a985f5fd6fae6360f01b7b68aba"
I0224 13:26:07.719325 932742 system_pods.go:59] 9 kube-system pods found
I0224 13:26:07.719366 932742 system_pods.go:61] "coredns-668d6bf9bc-x7p66" [70ac52ea-3347-493a-8765-76e772228d64] Running
I0224 13:26:07.719373 932742 system_pods.go:61] "etcd-no-preload-037941" [d4c09cb9-53bb-4465-951c-9c0c2930b419] Running
I0224 13:26:07.719377 932742 system_pods.go:61] "kindnet-dg6nl" [53bc76ea-e8dc-4fe4-8aca-69e9c88f2930] Running
I0224 13:26:07.719383 932742 system_pods.go:61] "kube-apiserver-no-preload-037941" [6a2fb797-f369-431a-9050-87230492b2ff] Running
I0224 13:26:07.719387 932742 system_pods.go:61] "kube-controller-manager-no-preload-037941" [fccdea28-4474-4216-9ecb-306fbab3af95] Running
I0224 13:26:07.719390 932742 system_pods.go:61] "kube-proxy-p6xtb" [07bb9533-9c4b-4680-9a49-f5c83aae1dda] Running
I0224 13:26:07.719394 932742 system_pods.go:61] "kube-scheduler-no-preload-037941" [598a7fbc-7989-4de8-bf7c-866e149aafce] Running
I0224 13:26:07.719401 932742 system_pods.go:61] "metrics-server-f79f97bbb-8d9mq" [87e10739-05ce-4ead-b231-bfda54625b23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0224 13:26:07.719411 932742 system_pods.go:61] "storage-provisioner" [242ea5c4-4efe-49ae-acfa-faf73b62a702] Running
I0224 13:26:07.719423 932742 system_pods.go:74] duration metric: took 4.130227944s to wait for pod list to return data ...
I0224 13:26:07.719438 932742 default_sa.go:34] waiting for default service account to be created ...
I0224 13:26:07.721914 932742 default_sa.go:45] found service account: "default"
I0224 13:26:07.721941 932742 default_sa.go:55] duration metric: took 2.496388ms for default service account to be created ...
I0224 13:26:07.721950 932742 system_pods.go:116] waiting for k8s-apps to be running ...
I0224 13:26:07.724526 932742 system_pods.go:86] 9 kube-system pods found
I0224 13:26:07.724559 932742 system_pods.go:89] "coredns-668d6bf9bc-x7p66" [70ac52ea-3347-493a-8765-76e772228d64] Running
I0224 13:26:07.724566 932742 system_pods.go:89] "etcd-no-preload-037941" [d4c09cb9-53bb-4465-951c-9c0c2930b419] Running
I0224 13:26:07.724571 932742 system_pods.go:89] "kindnet-dg6nl" [53bc76ea-e8dc-4fe4-8aca-69e9c88f2930] Running
I0224 13:26:07.724576 932742 system_pods.go:89] "kube-apiserver-no-preload-037941" [6a2fb797-f369-431a-9050-87230492b2ff] Running
I0224 13:26:07.724580 932742 system_pods.go:89] "kube-controller-manager-no-preload-037941" [fccdea28-4474-4216-9ecb-306fbab3af95] Running
I0224 13:26:07.724585 932742 system_pods.go:89] "kube-proxy-p6xtb" [07bb9533-9c4b-4680-9a49-f5c83aae1dda] Running
I0224 13:26:07.724589 932742 system_pods.go:89] "kube-scheduler-no-preload-037941" [598a7fbc-7989-4de8-bf7c-866e149aafce] Running
I0224 13:26:07.724596 932742 system_pods.go:89] "metrics-server-f79f97bbb-8d9mq" [87e10739-05ce-4ead-b231-bfda54625b23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0224 13:26:07.724602 932742 system_pods.go:89] "storage-provisioner" [242ea5c4-4efe-49ae-acfa-faf73b62a702] Running
I0224 13:26:07.724610 932742 system_pods.go:126] duration metric: took 2.653544ms to wait for k8s-apps to be running ...
I0224 13:26:07.724620 932742 system_svc.go:44] waiting for kubelet service to be running ....
I0224 13:26:07.724680 932742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0224 13:26:07.737364 932742 system_svc.go:56] duration metric: took 12.734139ms WaitForService to wait for kubelet
I0224 13:26:07.737394 932742 kubeadm.go:582] duration metric: took 4m26.752866842s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0224 13:26:07.737415 932742 node_conditions.go:102] verifying NodePressure condition ...
I0224 13:26:07.740236 932742 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0224 13:26:07.740272 932742 node_conditions.go:123] node cpu capacity is 2
I0224 13:26:07.740287 932742 node_conditions.go:105] duration metric: took 2.865617ms to run NodePressure ...
I0224 13:26:07.740300 932742 start.go:241] waiting for startup goroutines ...
I0224 13:26:07.740308 932742 start.go:246] waiting for cluster config update ...
I0224 13:26:07.740319 932742 start.go:255] writing updated cluster config ...
I0224 13:26:07.740629 932742 ssh_runner.go:195] Run: rm -f paused
I0224 13:26:07.805989 932742 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
I0224 13:26:07.811188 932742 out.go:177] * Done! kubectl is now configured to use "no-preload-037941" cluster and "default" namespace by default
I0224 13:26:13.182775 927252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 13:26:13.195315 927252 api_server.go:72] duration metric: took 5m58.004084642s to wait for apiserver process to appear ...
I0224 13:26:13.195342 927252 api_server.go:88] waiting for apiserver healthz status ...
I0224 13:26:13.195379 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0224 13:26:13.195438 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0224 13:26:13.236507 927252 cri.go:89] found id: "43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a"
I0224 13:26:13.236529 927252 cri.go:89] found id: "a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa"
I0224 13:26:13.236534 927252 cri.go:89] found id: ""
I0224 13:26:13.236542 927252 logs.go:282] 2 containers: [43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa]
I0224 13:26:13.236606 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.240470 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.245362 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0224 13:26:13.245433 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0224 13:26:13.286760 927252 cri.go:89] found id: "ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf"
I0224 13:26:13.286786 927252 cri.go:89] found id: "f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8"
I0224 13:26:13.286790 927252 cri.go:89] found id: ""
I0224 13:26:13.286798 927252 logs.go:282] 2 containers: [ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8]
I0224 13:26:13.286857 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.291304 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.295148 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0224 13:26:13.295220 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0224 13:26:13.340080 927252 cri.go:89] found id: "911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c"
I0224 13:26:13.340103 927252 cri.go:89] found id: "9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5"
I0224 13:26:13.340108 927252 cri.go:89] found id: ""
I0224 13:26:13.340116 927252 logs.go:282] 2 containers: [911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c 9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5]
I0224 13:26:13.340176 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.344114 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.347453 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0224 13:26:13.347528 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0224 13:26:13.387402 927252 cri.go:89] found id: "e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd"
I0224 13:26:13.387426 927252 cri.go:89] found id: "46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73"
I0224 13:26:13.387431 927252 cri.go:89] found id: ""
I0224 13:26:13.387440 927252 logs.go:282] 2 containers: [e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd 46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73]
I0224 13:26:13.387498 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.391191 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.394743 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0224 13:26:13.394847 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0224 13:26:13.442658 927252 cri.go:89] found id: "d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01"
I0224 13:26:13.442681 927252 cri.go:89] found id: "bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3"
I0224 13:26:13.442686 927252 cri.go:89] found id: ""
I0224 13:26:13.442694 927252 logs.go:282] 2 containers: [d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01 bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3]
I0224 13:26:13.442749 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.446724 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.451089 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0224 13:26:13.451161 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0224 13:26:13.494590 927252 cri.go:89] found id: "25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9"
I0224 13:26:13.494659 927252 cri.go:89] found id: "f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d"
I0224 13:26:13.494677 927252 cri.go:89] found id: ""
I0224 13:26:13.494697 927252 logs.go:282] 2 containers: [25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9 f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d]
I0224 13:26:13.494786 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.498342 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.501762 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0224 13:26:13.501849 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0224 13:26:13.556809 927252 cri.go:89] found id: "9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597"
I0224 13:26:13.556832 927252 cri.go:89] found id: "1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7"
I0224 13:26:13.556838 927252 cri.go:89] found id: ""
I0224 13:26:13.556845 927252 logs.go:282] 2 containers: [9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597 1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7]
I0224 13:26:13.556929 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.560948 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.564523 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0224 13:26:13.564600 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0224 13:26:13.619145 927252 cri.go:89] found id: "061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419"
I0224 13:26:13.619169 927252 cri.go:89] found id: ""
I0224 13:26:13.619177 927252 logs.go:282] 1 containers: [061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419]
I0224 13:26:13.619250 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.622662 927252 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0224 13:26:13.622754 927252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0224 13:26:13.667120 927252 cri.go:89] found id: "3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d"
I0224 13:26:13.667144 927252 cri.go:89] found id: "6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992"
I0224 13:26:13.667149 927252 cri.go:89] found id: ""
I0224 13:26:13.667156 927252 logs.go:282] 2 containers: [3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d 6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992]
I0224 13:26:13.667221 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.670885 927252 ssh_runner.go:195] Run: which crictl
I0224 13:26:13.674330 927252 logs.go:123] Gathering logs for kube-apiserver [43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a] ...
I0224 13:26:13.674357 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a"
I0224 13:26:13.728017 927252 logs.go:123] Gathering logs for kube-proxy [bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3] ...
I0224 13:26:13.728053 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3"
I0224 13:26:13.771893 927252 logs.go:123] Gathering logs for kube-controller-manager [f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d] ...
I0224 13:26:13.771923 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d"
I0224 13:26:13.842183 927252 logs.go:123] Gathering logs for kindnet [9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597] ...
I0224 13:26:13.842220 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597"
I0224 13:26:13.884645 927252 logs.go:123] Gathering logs for kindnet [1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7] ...
I0224 13:26:13.884674 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7"
I0224 13:26:13.932797 927252 logs.go:123] Gathering logs for dmesg ...
I0224 13:26:13.932824 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0224 13:26:13.951072 927252 logs.go:123] Gathering logs for kube-apiserver [a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa] ...
I0224 13:26:13.951104 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa"
I0224 13:26:14.025649 927252 logs.go:123] Gathering logs for etcd [ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf] ...
I0224 13:26:14.025696 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf"
I0224 13:26:14.069785 927252 logs.go:123] Gathering logs for coredns [911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c] ...
I0224 13:26:14.069816 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c"
I0224 13:26:14.109379 927252 logs.go:123] Gathering logs for storage-provisioner [3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d] ...
I0224 13:26:14.109417 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d"
I0224 13:26:14.156027 927252 logs.go:123] Gathering logs for storage-provisioner [6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992] ...
I0224 13:26:14.156063 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992"
I0224 13:26:14.214626 927252 logs.go:123] Gathering logs for kubelet ...
I0224 13:26:14.214661 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0224 13:26:14.278321 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:39 old-k8s-version-041199 kubelet[667]: E0224 13:20:39.112850 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:14.278526 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:39 old-k8s-version-041199 kubelet[667]: E0224 13:20:39.658396 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.281345 927252 logs.go:138] Found kubelet problem: Feb 24 13:20:50 old-k8s-version-041199 kubelet[667]: E0224 13:20:50.259925 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:14.283488 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:01 old-k8s-version-041199 kubelet[667]: E0224 13:21:01.758905 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.283821 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:02 old-k8s-version-041199 kubelet[667]: E0224 13:21:02.770204 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.284495 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:04 old-k8s-version-041199 kubelet[667]: E0224 13:21:04.241853 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.284943 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:09 old-k8s-version-041199 kubelet[667]: E0224 13:21:09.799481 667 pod_workers.go:191] Error syncing pod 6a90578d-b6eb-41b6-8f00-06711366057b ("storage-provisioner_kube-system(6a90578d-b6eb-41b6-8f00-06711366057b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6a90578d-b6eb-41b6-8f00-06711366057b)"
W0224 13:26:14.285309 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:10 old-k8s-version-041199 kubelet[667]: E0224 13:21:10.834981 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.288170 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:16 old-k8s-version-041199 kubelet[667]: E0224 13:21:16.250896 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:14.288926 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:25 old-k8s-version-041199 kubelet[667]: E0224 13:21:25.847989 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.289260 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:30 old-k8s-version-041199 kubelet[667]: E0224 13:21:30.835028 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.289447 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:31 old-k8s-version-041199 kubelet[667]: E0224 13:21:31.242247 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.289645 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:43 old-k8s-version-041199 kubelet[667]: E0224 13:21:43.241958 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.290826 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:46 old-k8s-version-041199 kubelet[667]: E0224 13:21:46.925238 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.291169 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:50 old-k8s-version-041199 kubelet[667]: E0224 13:21:50.836033 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.291376 927252 logs.go:138] Found kubelet problem: Feb 24 13:21:56 old-k8s-version-041199 kubelet[667]: E0224 13:21:56.241820 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.291712 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:03 old-k8s-version-041199 kubelet[667]: E0224 13:22:03.245426 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.294286 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:10 old-k8s-version-041199 kubelet[667]: E0224 13:22:10.250794 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:14.294623 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:15 old-k8s-version-041199 kubelet[667]: E0224 13:22:15.241830 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.294808 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:21 old-k8s-version-041199 kubelet[667]: E0224 13:22:21.243021 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.295137 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:26 old-k8s-version-041199 kubelet[667]: E0224 13:22:26.241985 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.295330 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:35 old-k8s-version-041199 kubelet[667]: E0224 13:22:35.242040 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.295941 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:41 old-k8s-version-041199 kubelet[667]: E0224 13:22:41.077125 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.296126 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:50 old-k8s-version-041199 kubelet[667]: E0224 13:22:50.241885 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.296452 927252 logs.go:138] Found kubelet problem: Feb 24 13:22:50 old-k8s-version-041199 kubelet[667]: E0224 13:22:50.835068 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.296639 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:01 old-k8s-version-041199 kubelet[667]: E0224 13:23:01.245324 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.296964 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:02 old-k8s-version-041199 kubelet[667]: E0224 13:23:02.241145 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.297289 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:14 old-k8s-version-041199 kubelet[667]: E0224 13:23:14.241413 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.297475 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:15 old-k8s-version-041199 kubelet[667]: E0224 13:23:15.242710 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.297815 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:28 old-k8s-version-041199 kubelet[667]: E0224 13:23:28.243135 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.298002 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:30 old-k8s-version-041199 kubelet[667]: E0224 13:23:30.241731 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.298329 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:40 old-k8s-version-041199 kubelet[667]: E0224 13:23:40.241083 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.300780 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:43 old-k8s-version-041199 kubelet[667]: E0224 13:23:43.253359 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0224 13:26:14.301107 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:54 old-k8s-version-041199 kubelet[667]: E0224 13:23:54.246604 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.301290 927252 logs.go:138] Found kubelet problem: Feb 24 13:23:58 old-k8s-version-041199 kubelet[667]: E0224 13:23:58.241520 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.301638 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:09 old-k8s-version-041199 kubelet[667]: E0224 13:24:09.241847 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.302143 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:09 old-k8s-version-041199 kubelet[667]: E0224 13:24:09.331557 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.302478 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:10 old-k8s-version-041199 kubelet[667]: E0224 13:24:10.835264 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.302663 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:23 old-k8s-version-041199 kubelet[667]: E0224 13:24:23.242791 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.303002 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:25 old-k8s-version-041199 kubelet[667]: E0224 13:24:25.241309 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.303187 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:35 old-k8s-version-041199 kubelet[667]: E0224 13:24:35.243051 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.303513 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:40 old-k8s-version-041199 kubelet[667]: E0224 13:24:40.241570 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.303698 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:49 old-k8s-version-041199 kubelet[667]: E0224 13:24:49.241522 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.304028 927252 logs.go:138] Found kubelet problem: Feb 24 13:24:54 old-k8s-version-041199 kubelet[667]: E0224 13:24:54.241230 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.304212 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:00 old-k8s-version-041199 kubelet[667]: E0224 13:25:00.248099 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.304537 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:05 old-k8s-version-041199 kubelet[667]: E0224 13:25:05.247533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.304723 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:15 old-k8s-version-041199 kubelet[667]: E0224 13:25:15.242757 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.305053 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:18 old-k8s-version-041199 kubelet[667]: E0224 13:25:18.241668 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.305238 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:29 old-k8s-version-041199 kubelet[667]: E0224 13:25:29.245663 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.305563 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:32 old-k8s-version-041199 kubelet[667]: E0224 13:25:32.241514 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.305908 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.241131 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.306094 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.242341 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.306931 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:55 old-k8s-version-041199 kubelet[667]: E0224 13:25:55.244533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.307134 927252 logs.go:138] Found kubelet problem: Feb 24 13:25:59 old-k8s-version-041199 kubelet[667]: E0224 13:25:59.241703 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.307517 927252 logs.go:138] Found kubelet problem: Feb 24 13:26:07 old-k8s-version-041199 kubelet[667]: E0224 13:26:07.249713 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.307722 927252 logs.go:138] Found kubelet problem: Feb 24 13:26:11 old-k8s-version-041199 kubelet[667]: E0224 13:26:11.241765 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0224 13:26:14.307736 927252 logs.go:123] Gathering logs for etcd [f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8] ...
I0224 13:26:14.307750 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8"
I0224 13:26:14.369693 927252 logs.go:123] Gathering logs for coredns [9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5] ...
I0224 13:26:14.369730 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5"
I0224 13:26:14.412096 927252 logs.go:123] Gathering logs for kube-proxy [d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01] ...
I0224 13:26:14.412127 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01"
I0224 13:26:14.467225 927252 logs.go:123] Gathering logs for kubernetes-dashboard [061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419] ...
I0224 13:26:14.467253 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419"
I0224 13:26:14.513827 927252 logs.go:123] Gathering logs for containerd ...
I0224 13:26:14.513855 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0224 13:26:14.578597 927252 logs.go:123] Gathering logs for container status ...
I0224 13:26:14.578639 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 13:26:14.626162 927252 logs.go:123] Gathering logs for describe nodes ...
I0224 13:26:14.626194 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0224 13:26:14.767473 927252 logs.go:123] Gathering logs for kube-scheduler [e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd] ...
I0224 13:26:14.767505 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd"
I0224 13:26:14.815377 927252 logs.go:123] Gathering logs for kube-scheduler [46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73] ...
I0224 13:26:14.815410 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73"
I0224 13:26:14.860179 927252 logs.go:123] Gathering logs for kube-controller-manager [25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9] ...
I0224 13:26:14.860224 927252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9"
I0224 13:26:14.929779 927252 out.go:358] Setting ErrFile to fd 2...
I0224 13:26:14.929809 927252 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0224 13:26:14.929865 927252 out.go:270] X Problems detected in kubelet:
W0224 13:26:14.929880 927252 out.go:270] Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.242341 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.929888 927252 out.go:270] Feb 24 13:25:55 old-k8s-version-041199 kubelet[667]: E0224 13:25:55.244533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.929897 927252 out.go:270] Feb 24 13:25:59 old-k8s-version-041199 kubelet[667]: E0224 13:25:59.241703 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0224 13:26:14.929911 927252 out.go:270] Feb 24 13:26:07 old-k8s-version-041199 kubelet[667]: E0224 13:26:07.249713 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
W0224 13:26:14.929919 927252 out.go:270] Feb 24 13:26:11 old-k8s-version-041199 kubelet[667]: E0224 13:26:11.241765 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0224 13:26:14.929928 927252 out.go:358] Setting ErrFile to fd 2...
I0224 13:26:14.929934 927252 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 13:26:24.931888 927252 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0224 13:26:24.944372 927252 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0224 13:26:24.952204 927252 out.go:201]
W0224 13:26:24.958023 927252 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0224 13:26:24.958235 927252 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0224 13:26:24.958300 927252 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0224 13:26:24.958333 927252 out.go:270] *
W0224 13:26:24.963537 927252 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0224 13:26:24.967780 927252 out.go:201]
==> container status <==
CONTAINER           IMAGE               CREATED             STATE         NAME                          ATTEMPT   POD ID              POD
2d76c925b8ec5       523cad1a4df73       2 minutes ago       Exited        dashboard-metrics-scraper     5         854638e1f9cfb       dashboard-metrics-scraper-8d5bb5db8-w8s9j
3385223105aad       ba04bb24b9575       5 minutes ago       Running       storage-provisioner           2         adbd92ec3a84d       storage-provisioner
061e5eb8df210       20b332c9a70d8       5 minutes ago       Running       kubernetes-dashboard          0         a40ebe50befb9       kubernetes-dashboard-cd95d586-gr4bd
5061982a55059       1611cd07b61d5       5 minutes ago       Running       busybox                       1         a48c2ec56aa5c       busybox
911c5999e3003       db91994f4ee8f       5 minutes ago       Running       coredns                       1         6abe568611d3f       coredns-74ff55c5b-9947z
d5ae265382dbb       25a5233254979       5 minutes ago       Running       kube-proxy                    1         c939a8be70cb2       kube-proxy-gxpjd
9e5558150a84d       ee75e27fff91c       5 minutes ago       Running       kindnet-cni                   1         30b2eb578e73b       kindnet-jdh7t
6cd982738bc8c       ba04bb24b9575       5 minutes ago       Exited        storage-provisioner           1         adbd92ec3a84d       storage-provisioner
43e1b0af6b5d3       2c08bbbc02d3a       6 minutes ago       Running       kube-apiserver                1         f6009117b1fa2       kube-apiserver-old-k8s-version-041199
ed6b4406e79a0       05b738aa1bc63       6 minutes ago       Running       etcd                          1         3321ab6c90978       etcd-old-k8s-version-041199
25071595e4dc8       1df8a2b116bd1       6 minutes ago       Running       kube-controller-manager       1         445a2dd1b1954       kube-controller-manager-old-k8s-version-041199
e32638610f31c       e7605f88f17d6       6 minutes ago       Running       kube-scheduler                1         2d0f59d1de491       kube-scheduler-old-k8s-version-041199
0611df2b75bce       1611cd07b61d5       6 minutes ago       Exited        busybox                       0         538dbd7a00b17       busybox
9ad8c88a33bb0       db91994f4ee8f       7 minutes ago       Exited        coredns                       0         fcb442bdfb1d1       coredns-74ff55c5b-9947z
1abd4f352076d       ee75e27fff91c       8 minutes ago       Exited        kindnet-cni                   0         dd5b15ef36a1a       kindnet-jdh7t
bbc9e43f68288       25a5233254979       8 minutes ago       Exited        kube-proxy                    0         41cf12dd1154f       kube-proxy-gxpjd
f7dcccd0ed14d       05b738aa1bc63       8 minutes ago       Exited        etcd                          0         08f4b2147d198       etcd-old-k8s-version-041199
46d4401a32810       e7605f88f17d6       8 minutes ago       Exited        kube-scheduler                0         ccd0b4e6bf8f6       kube-scheduler-old-k8s-version-041199
f650e17ddac77       1df8a2b116bd1       8 minutes ago       Exited        kube-controller-manager       0         bb5122d192328       kube-controller-manager-old-k8s-version-041199
a2b750ff9019b       2c08bbbc02d3a       8 minutes ago       Exited        kube-apiserver                0         bc4ae265985e8       kube-apiserver-old-k8s-version-041199
==> containerd <==
Feb 24 13:22:10 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:10.250320805Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Feb 24 13:22:40 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:40.255426461Z" level=info msg="CreateContainer within sandbox \"854638e1f9cfb0ee2876af04e80760ebea9962836faaafc67eb4121969a0423f\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Feb 24 13:22:40 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:40.274601747Z" level=info msg="CreateContainer within sandbox \"854638e1f9cfb0ee2876af04e80760ebea9962836faaafc67eb4121969a0423f\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"6b840f587511235115236116ee21618ef277734078c1ef3510e096a6f03f86f2\""
Feb 24 13:22:40 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:40.275206170Z" level=info msg="StartContainer for \"6b840f587511235115236116ee21618ef277734078c1ef3510e096a6f03f86f2\""
Feb 24 13:22:40 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:40.353741538Z" level=info msg="StartContainer for \"6b840f587511235115236116ee21618ef277734078c1ef3510e096a6f03f86f2\" returns successfully"
Feb 24 13:22:40 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:40.356831510Z" level=info msg="received exit event container_id:\"6b840f587511235115236116ee21618ef277734078c1ef3510e096a6f03f86f2\" id:\"6b840f587511235115236116ee21618ef277734078c1ef3510e096a6f03f86f2\" pid:3072 exit_status:255 exited_at:{seconds:1740403360 nanos:356411361}"
Feb 24 13:22:40 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:40.386357971Z" level=info msg="shim disconnected" id=6b840f587511235115236116ee21618ef277734078c1ef3510e096a6f03f86f2 namespace=k8s.io
Feb 24 13:22:40 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:40.386420312Z" level=warning msg="cleaning up after shim disconnected" id=6b840f587511235115236116ee21618ef277734078c1ef3510e096a6f03f86f2 namespace=k8s.io
Feb 24 13:22:40 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:40.386429609Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 24 13:22:41 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:41.079364018Z" level=info msg="RemoveContainer for \"548cd0c344657a13cec46ade7ea86f2ad3d22418924462ddd69121f93374d8df\""
Feb 24 13:22:41 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:22:41.086956684Z" level=info msg="RemoveContainer for \"548cd0c344657a13cec46ade7ea86f2ad3d22418924462ddd69121f93374d8df\" returns successfully"
Feb 24 13:23:43 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:23:43.244538885Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 24 13:23:43 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:23:43.249982019Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Feb 24 13:23:43 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:23:43.251989376Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Feb 24 13:23:43 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:23:43.252015796Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Feb 24 13:24:08 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:24:08.243269553Z" level=info msg="CreateContainer within sandbox \"854638e1f9cfb0ee2876af04e80760ebea9962836faaafc67eb4121969a0423f\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Feb 24 13:24:08 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:24:08.261515034Z" level=info msg="CreateContainer within sandbox \"854638e1f9cfb0ee2876af04e80760ebea9962836faaafc67eb4121969a0423f\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95\""
Feb 24 13:24:08 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:24:08.262355086Z" level=info msg="StartContainer for \"2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95\""
Feb 24 13:24:08 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:24:08.348918705Z" level=info msg="StartContainer for \"2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95\" returns successfully"
Feb 24 13:24:08 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:24:08.353988778Z" level=info msg="received exit event container_id:\"2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95\" id:\"2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95\" pid:3323 exit_status:255 exited_at:{seconds:1740403448 nanos:353533553}"
Feb 24 13:24:08 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:24:08.381060526Z" level=info msg="shim disconnected" id=2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95 namespace=k8s.io
Feb 24 13:24:08 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:24:08.381122744Z" level=warning msg="cleaning up after shim disconnected" id=2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95 namespace=k8s.io
Feb 24 13:24:08 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:24:08.381133131Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 24 13:24:09 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:24:09.332779038Z" level=info msg="RemoveContainer for \"6b840f587511235115236116ee21618ef277734078c1ef3510e096a6f03f86f2\""
Feb 24 13:24:09 old-k8s-version-041199 containerd[574]: time="2025-02-24T13:24:09.340284707Z" level=info msg="RemoveContainer for \"6b840f587511235115236116ee21618ef277734078c1ef3510e096a6f03f86f2\" returns successfully"
==> coredns [911c5999e3003f5703982a4ef8ac5a30142e9cccda3d3b418a4d8d3753b8317c] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:49896 - 31570 "HINFO IN 3886716786717371263.3615482312281897992. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023656532s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0224 13:21:09.507515 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-24 13:20:39.506868918 +0000 UTC m=+0.029978823) (total time: 30.000521932s):
Trace[2019727887]: [30.000521932s] [30.000521932s] END
E0224 13:21:09.507558 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0224 13:21:09.508206 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-24 13:20:39.507396995 +0000 UTC m=+0.030506908) (total time: 30.000786171s):
Trace[939984059]: [30.000786171s] [30.000786171s] END
E0224 13:21:09.508222 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0224 13:21:09.508598 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-24 13:20:39.50820371 +0000 UTC m=+0.031313615) (total time: 30.000376361s):
Trace[911902081]: [30.000376361s] [30.000376361s] END
E0224 13:21:09.508614 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
==> coredns [9ad8c88a33bb06d9bcf11dacb6b91f54326a0435a78dc9137eca692c985e69e5] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:33020 - 2722 "HINFO IN 8733602151648335920.6203681182090823903. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024456395s
==> describe nodes <==
Name: old-k8s-version-041199
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-041199
kubernetes.io/os=linux
minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650
minikube.k8s.io/name=old-k8s-version-041199
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_02_24T13_17_56_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 24 Feb 2025 13:17:52 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-041199
AcquireTime: <unset>
RenewTime: Mon, 24 Feb 2025 13:26:18 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 24 Feb 2025 13:21:27 +0000 Mon, 24 Feb 2025 13:17:46 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 24 Feb 2025 13:21:27 +0000 Mon, 24 Feb 2025 13:17:46 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 24 Feb 2025 13:21:27 +0000 Mon, 24 Feb 2025 13:17:46 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 24 Feb 2025 13:21:27 +0000 Mon, 24 Feb 2025 13:18:11 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-041199
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: 616bf4f13fa3414f82aa4072ddd98746
System UUID: 6cb2f2ce-85a1-45fe-8ebe-005ed2198dce
Boot ID: 9e554593-ce57-415a-84fc-83235ad2d3ab
Kernel Version: 5.15.0-1077-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.25
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m46s
kube-system coredns-74ff55c5b-9947z 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m16s
kube-system etcd-old-k8s-version-041199 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m23s
kube-system kindnet-jdh7t 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 8m16s
kube-system kube-apiserver-old-k8s-version-041199 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m23s
kube-system kube-controller-manager-old-k8s-version-041199 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m23s
kube-system kube-proxy-gxpjd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m16s
kube-system kube-scheduler-old-k8s-version-041199 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m23s
kube-system metrics-server-9975d5f86-4hkkq 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m34s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m14s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-w8s9j 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m32s
kubernetes-dashboard kubernetes-dashboard-cd95d586-gr4bd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m32s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m42s (x5 over 8m42s) kubelet Node old-k8s-version-041199 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m42s (x5 over 8m42s) kubelet Node old-k8s-version-041199 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m42s (x5 over 8m42s) kubelet Node old-k8s-version-041199 status is now: NodeHasSufficientPID
Normal Starting 8m23s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m23s kubelet Node old-k8s-version-041199 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m23s kubelet Node old-k8s-version-041199 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m23s kubelet Node old-k8s-version-041199 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m23s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m16s kubelet Node old-k8s-version-041199 status is now: NodeReady
Normal Starting 8m15s kube-proxy Starting kube-proxy.
Normal Starting 6m4s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m4s (x8 over 6m4s) kubelet Node old-k8s-version-041199 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m4s (x7 over 6m4s) kubelet Node old-k8s-version-041199 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m4s (x8 over 6m4s) kubelet Node old-k8s-version-041199 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m4s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m48s kube-proxy Starting kube-proxy.
==> dmesg <==
[Feb24 12:13] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
==> etcd [ed6b4406e79a029d770cdfed765eb53b2f784053270e4196ea58f466a923ebaf] <==
2025-02-24 13:22:23.114291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:22:33.114265 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:22:43.114414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:22:53.114231 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:23:03.114283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:23:13.114183 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:23:23.114222 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:23:33.114252 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:23:43.114282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:23:53.114292 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:24:03.114290 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:24:13.114547 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:24:23.114508 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:24:33.114161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:24:43.114346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:24:53.114344 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:25:03.114286 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:25:13.114293 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:25:23.114238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:25:33.114264 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:25:43.114288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:25:53.114469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:26:03.114446 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:26:13.114302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:26:23.114232 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [f7dcccd0ed14d1a0a95fc2ec2aa2169c9162ab53540a80bceabbd20d676e61f8] <==
2025-02-24 13:17:46.050061 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
raft2025/02/24 13:17:46 INFO: ea7e25599daad906 is starting a new election at term 1
raft2025/02/24 13:17:46 INFO: ea7e25599daad906 became candidate at term 2
raft2025/02/24 13:17:46 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2025/02/24 13:17:46 INFO: ea7e25599daad906 became leader at term 2
raft2025/02/24 13:17:46 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-02-24 13:17:46.427712 I | etcdserver: published {Name:old-k8s-version-041199 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-02-24 13:17:46.427906 I | embed: ready to serve client requests
2025-02-24 13:17:46.429361 I | embed: serving client requests on 192.168.76.2:2379
2025-02-24 13:17:46.429987 I | embed: ready to serve client requests
2025-02-24 13:17:46.432445 I | embed: serving client requests on 127.0.0.1:2379
2025-02-24 13:17:46.437730 I | etcdserver: setting up the initial cluster version to 3.4
2025-02-24 13:17:46.439099 N | etcdserver/membership: set the initial cluster version to 3.4
2025-02-24 13:17:46.440533 I | etcdserver/api: enabled capabilities for version 3.4
2025-02-24 13:18:14.665900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:18:16.436344 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:18:26.436351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:18:36.436377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:18:46.436287 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:18:56.436512 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:19:06.436417 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:19:16.436327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:19:26.436480 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:19:36.437709 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-24 13:19:46.436491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
13:26:27 up 4:08, 0 users, load average: 0.44, 1.79, 2.51
Linux old-k8s-version-041199 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [1abd4f352076ddb1232c08c67d0bcea8823225dcff4e8f1b6e4546626985b2d7] <==
I0224 13:18:14.636430 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
I0224 13:18:15.034048 1 controller.go:361] Starting controller kube-network-policies
I0224 13:18:15.034071 1 controller.go:365] Waiting for informer caches to sync
I0224 13:18:15.034093 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I0224 13:18:15.234702 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0224 13:18:15.234730 1 metrics.go:61] Registering metrics
I0224 13:18:15.234799 1 controller.go:401] Syncing nftables rules
I0224 13:18:25.035731 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:18:25.035799 1 main.go:301] handling current node
I0224 13:18:35.034502 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:18:35.034539 1 main.go:301] handling current node
I0224 13:18:45.034820 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:18:45.034859 1 main.go:301] handling current node
I0224 13:18:55.043251 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:18:55.043285 1 main.go:301] handling current node
I0224 13:19:05.041722 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:19:05.041823 1 main.go:301] handling current node
I0224 13:19:15.034524 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:19:15.034563 1 main.go:301] handling current node
I0224 13:19:25.037133 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:19:25.037170 1 main.go:301] handling current node
I0224 13:19:35.034923 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:19:35.034962 1 main.go:301] handling current node
I0224 13:19:45.034948 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:19:45.035064 1 main.go:301] handling current node
==> kindnet [9e5558150a84d87e18394eca81b81491aa8a2d4765b3fa39de6ef44d24951597] <==
I0224 13:24:19.738298 1 main.go:301] handling current node
I0224 13:24:29.742518 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:24:29.742609 1 main.go:301] handling current node
I0224 13:24:39.734798 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:24:39.734832 1 main.go:301] handling current node
I0224 13:24:49.741703 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:24:49.741741 1 main.go:301] handling current node
I0224 13:24:59.743003 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:24:59.743038 1 main.go:301] handling current node
I0224 13:25:09.735324 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:25:09.735361 1 main.go:301] handling current node
I0224 13:25:19.742478 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:25:19.742508 1 main.go:301] handling current node
I0224 13:25:29.742504 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:25:29.742537 1 main.go:301] handling current node
I0224 13:25:39.734346 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:25:39.734380 1 main.go:301] handling current node
I0224 13:25:49.742549 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:25:49.742642 1 main.go:301] handling current node
I0224 13:25:59.749797 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:25:59.750018 1 main.go:301] handling current node
I0224 13:26:09.734725 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:26:09.734761 1 main.go:301] handling current node
I0224 13:26:19.741962 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0224 13:26:19.742426 1 main.go:301] handling current node
==> kube-apiserver [43e1b0af6b5d314bf60060219758b4a1884a809bae0da1ac0bf8bce3c0e5859a] <==
I0224 13:22:37.706372 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0224 13:22:37.706407 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0224 13:23:21.934550 1 client.go:360] parsed scheme: "passthrough"
I0224 13:23:21.934592 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0224 13:23:21.934600 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0224 13:23:39.622409 1 handler_proxy.go:102] no RequestInfo found in the context
E0224 13:23:39.622539 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0224 13:23:39.622575 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0224 13:24:01.602633 1 client.go:360] parsed scheme: "passthrough"
I0224 13:24:01.602679 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0224 13:24:01.602689 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0224 13:24:45.217509 1 client.go:360] parsed scheme: "passthrough"
I0224 13:24:45.217566 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0224 13:24:45.217575 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0224 13:25:18.267677 1 client.go:360] parsed scheme: "passthrough"
I0224 13:25:18.267729 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0224 13:25:18.267738 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0224 13:25:36.940932 1 handler_proxy.go:102] no RequestInfo found in the context
E0224 13:25:36.941193 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0224 13:25:36.941209 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0224 13:25:56.123328 1 client.go:360] parsed scheme: "passthrough"
I0224 13:25:56.123383 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0224 13:25:56.123392 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [a2b750ff9019b86b1826887de62ad5451efe151d5f4c2fde60875eda992d79aa] <==
I0224 13:17:53.617128 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0224 13:17:53.617159 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0224 13:17:53.645413 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0224 13:17:53.650303 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0224 13:17:53.650460 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0224 13:17:54.162389 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0224 13:17:54.220817 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0224 13:17:54.356411 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0224 13:17:54.357658 1 controller.go:606] quota admission added evaluator for: endpoints
I0224 13:17:54.363361 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0224 13:17:55.281931 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0224 13:17:56.108483 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0224 13:17:56.218736 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0224 13:18:04.540733 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0224 13:18:11.389308 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0224 13:18:11.393374 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0224 13:18:20.976548 1 client.go:360] parsed scheme: "passthrough"
I0224 13:18:20.976596 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0224 13:18:20.976633 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0224 13:18:59.426448 1 client.go:360] parsed scheme: "passthrough"
I0224 13:18:59.426575 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0224 13:18:59.426623 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0224 13:19:32.545148 1 client.go:360] parsed scheme: "passthrough"
I0224 13:19:32.545369 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0224 13:19:32.545483 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [25071595e4dc8e7e6512d7e34f9a9d7d62dac34f42c7446e2190fd3bd2cddcf9] <==
W0224 13:22:01.071560 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0224 13:22:27.121936 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0224 13:22:32.722285 1 request.go:655] Throttling request took 1.048444898s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W0224 13:22:33.573976 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0224 13:22:57.623743 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0224 13:23:05.224398 1 request.go:655] Throttling request took 1.048477352s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
W0224 13:23:06.075925 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0224 13:23:28.125759 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0224 13:23:37.726466 1 request.go:655] Throttling request took 1.048247445s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0224 13:23:38.577931 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0224 13:23:58.635613 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0224 13:24:10.228497 1 request.go:655] Throttling request took 1.048231094s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
W0224 13:24:11.080990 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0224 13:24:29.137462 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0224 13:24:42.731596 1 request.go:655] Throttling request took 1.048245462s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0224 13:24:43.583132 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0224 13:24:59.639376 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0224 13:25:15.233530 1 request.go:655] Throttling request took 1.04826618s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0224 13:25:16.085237 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0224 13:25:30.141401 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0224 13:25:47.735888 1 request.go:655] Throttling request took 1.048429466s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0224 13:25:48.613950 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0224 13:26:00.644889 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0224 13:26:20.264427 1 request.go:655] Throttling request took 1.044827277s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0224 13:26:21.116549 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-controller-manager [f650e17ddac778873d5a3a2750d0031eacb48a2f678353e8644dc3269b17e23d] <==
I0224 13:18:11.429131 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0224 13:18:11.434385 1 shared_informer.go:247] Caches are synced for ReplicationController
I0224 13:18:11.437342 1 shared_informer.go:247] Caches are synced for resource quota
I0224 13:18:11.457395 1 shared_informer.go:247] Caches are synced for disruption
I0224 13:18:11.457423 1 disruption.go:339] Sending events to api server.
I0224 13:18:11.463259 1 shared_informer.go:247] Caches are synced for attach detach
I0224 13:18:11.463349 1 shared_informer.go:247] Caches are synced for resource quota
I0224 13:18:11.463370 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0224 13:18:11.469353 1 shared_informer.go:247] Caches are synced for taint
I0224 13:18:11.469454 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0224 13:18:11.469506 1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-041199. Assuming now as a timestamp.
I0224 13:18:11.469554 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal.
I0224 13:18:11.469638 1 event.go:291] "Event occurred" object="old-k8s-version-041199" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-041199 event: Registered Node old-k8s-version-041199 in Controller"
I0224 13:18:11.469700 1 taint_manager.go:187] Starting NoExecuteTaintManager
E0224 13:18:11.498684 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"52ada4e0-f826-437f-9a8b-7f60703265bd", ResourceVersion:"285", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63875999876, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20250214-acbabc1a\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400076c7c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400076c820)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400076c840), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400076c860), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400076c880), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400076c8a0), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20250214-acbabc1a", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400076c8c0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400076c900)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400140a060), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000a96d38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000b06a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000607930)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000a96d80)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I0224 13:18:11.500052 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-fktmx"
I0224 13:18:11.515000 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-9947z"
I0224 13:18:11.634078 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0224 13:18:11.902837 1 shared_informer.go:247] Caches are synced for garbage collector
I0224 13:18:11.902859 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0224 13:18:11.934377 1 shared_informer.go:247] Caches are synced for garbage collector
I0224 13:18:13.087525 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0224 13:18:13.115290 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-fktmx"
I0224 13:19:52.314427 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I0224 13:19:53.352981 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-4hkkq"
==> kube-proxy [bbc9e43f68288e0d411de2957589dc809f5523f54da276b377e09a0c5e21cfc3] <==
I0224 13:18:12.494275 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0224 13:18:12.494402 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0224 13:18:12.537363 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0224 13:18:12.537448 1 server_others.go:185] Using iptables Proxier.
I0224 13:18:12.537697 1 server.go:650] Version: v1.20.0
I0224 13:18:12.538453 1 config.go:315] Starting service config controller
I0224 13:18:12.538461 1 shared_informer.go:240] Waiting for caches to sync for service config
I0224 13:18:12.538477 1 config.go:224] Starting endpoint slice config controller
I0224 13:18:12.538481 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0224 13:18:12.638555 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0224 13:18:12.638627 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [d5ae265382dbbee8d750583da6801b5973ff7f70431aa59838203058ce844d01] <==
I0224 13:20:39.476517 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0224 13:20:39.476595 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0224 13:20:39.541866 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0224 13:20:39.542188 1 server_others.go:185] Using iptables Proxier.
I0224 13:20:39.542866 1 server.go:650] Version: v1.20.0
I0224 13:20:39.543778 1 config.go:315] Starting service config controller
I0224 13:20:39.543803 1 shared_informer.go:240] Waiting for caches to sync for service config
I0224 13:20:39.544149 1 config.go:224] Starting endpoint slice config controller
I0224 13:20:39.544241 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0224 13:20:39.643933 1 shared_informer.go:247] Caches are synced for service config
I0224 13:20:39.644413 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [46d4401a32810e3afa1b94f8cd27a0f8a5943dda4dc3f6bfaed0837dc0f57c73] <==
W0224 13:17:52.794460 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0224 13:17:52.794733 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0224 13:17:52.794826 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0224 13:17:52.852107 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0224 13:17:52.859271 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0224 13:17:52.859418 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0224 13:17:52.859572 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0224 13:17:52.902230 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0224 13:17:52.905926 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0224 13:17:52.906605 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0224 13:17:52.906780 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0224 13:17:52.942070 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0224 13:17:52.942209 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0224 13:17:52.942312 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0224 13:17:52.942411 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0224 13:17:52.942492 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0224 13:17:52.942595 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0224 13:17:52.942688 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0224 13:17:52.942940 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0224 13:17:53.771128 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0224 13:17:53.857262 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0224 13:17:53.932996 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0224 13:17:53.955572 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0224 13:17:53.979323 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0224 13:17:55.860072 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [e32638610f31c6ca0dc959876d7dfa3d1ef3a3eb6edab79eae946febd75f7bbd] <==
I0224 13:20:29.116126 1 serving.go:331] Generated self-signed cert in-memory
W0224 13:20:35.909885 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0224 13:20:35.910004 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0224 13:20:35.911768 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0224 13:20:35.911842 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0224 13:20:36.177011 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0224 13:20:36.177129 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0224 13:20:36.177137 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0224 13:20:36.177148 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0224 13:20:36.287152 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Feb 24 13:24:40 old-k8s-version-041199 kubelet[667]: E0224 13:24:40.241570 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
Feb 24 13:24:49 old-k8s-version-041199 kubelet[667]: E0224 13:24:49.241522 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 24 13:24:54 old-k8s-version-041199 kubelet[667]: I0224 13:24:54.240860 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95
Feb 24 13:24:54 old-k8s-version-041199 kubelet[667]: E0224 13:24:54.241230 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
Feb 24 13:25:00 old-k8s-version-041199 kubelet[667]: E0224 13:25:00.248099 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 24 13:25:05 old-k8s-version-041199 kubelet[667]: I0224 13:25:05.246676 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95
Feb 24 13:25:05 old-k8s-version-041199 kubelet[667]: E0224 13:25:05.247533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
Feb 24 13:25:15 old-k8s-version-041199 kubelet[667]: E0224 13:25:15.242757 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 24 13:25:18 old-k8s-version-041199 kubelet[667]: I0224 13:25:18.240777 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95
Feb 24 13:25:18 old-k8s-version-041199 kubelet[667]: E0224 13:25:18.241668 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
Feb 24 13:25:29 old-k8s-version-041199 kubelet[667]: E0224 13:25:29.245663 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 24 13:25:32 old-k8s-version-041199 kubelet[667]: I0224 13:25:32.240779 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95
Feb 24 13:25:32 old-k8s-version-041199 kubelet[667]: E0224 13:25:32.241514 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: I0224 13:25:44.240746 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95
Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.241131 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
Feb 24 13:25:44 old-k8s-version-041199 kubelet[667]: E0224 13:25:44.242341 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 24 13:25:55 old-k8s-version-041199 kubelet[667]: I0224 13:25:55.243657 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95
Feb 24 13:25:55 old-k8s-version-041199 kubelet[667]: E0224 13:25:55.244533 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
Feb 24 13:25:59 old-k8s-version-041199 kubelet[667]: E0224 13:25:59.241703 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 24 13:26:07 old-k8s-version-041199 kubelet[667]: I0224 13:26:07.241101 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95
Feb 24 13:26:07 old-k8s-version-041199 kubelet[667]: E0224 13:26:07.249713 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
Feb 24 13:26:11 old-k8s-version-041199 kubelet[667]: E0224 13:26:11.241765 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 24 13:26:18 old-k8s-version-041199 kubelet[667]: I0224 13:26:18.240793 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d76c925b8ec5326a6a3efe0b8c52fb9d1511c8131ee717268e39539d2509e95
Feb 24 13:26:18 old-k8s-version-041199 kubelet[667]: E0224 13:26:18.241136 667 pod_workers.go:191] Error syncing pod 02e62a3d-c42c-4901-b193-5a7e27816cdd ("dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w8s9j_kubernetes-dashboard(02e62a3d-c42c-4901-b193-5a7e27816cdd)"
Feb 24 13:26:22 old-k8s-version-041199 kubelet[667]: E0224 13:26:22.241739 667 pod_workers.go:191] Error syncing pod 373512a1-6080-4596-9992-af1a830337ab ("metrics-server-9975d5f86-4hkkq_kube-system(373512a1-6080-4596-9992-af1a830337ab)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
==> kubernetes-dashboard [061e5eb8df21082d17b31668dc15cb36c1e13f6162d97be1590e44dd95f07419] <==
2025/02/24 13:21:03 Using namespace: kubernetes-dashboard
2025/02/24 13:21:03 Using in-cluster config to connect to apiserver
2025/02/24 13:21:03 Using secret token for csrf signing
2025/02/24 13:21:03 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/02/24 13:21:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/02/24 13:21:03 Successful initial request to the apiserver, version: v1.20.0
2025/02/24 13:21:03 Generating JWE encryption key
2025/02/24 13:21:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/02/24 13:21:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/02/24 13:21:05 Initializing JWE encryption key from synchronized object
2025/02/24 13:21:05 Creating in-cluster Sidecar client
2025/02/24 13:21:05 Serving insecurely on HTTP port: 9090
2025/02/24 13:21:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:21:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:22:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:22:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:23:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:23:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:24:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:24:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:25:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:25:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:26:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/24 13:21:03 Starting overwatch
==> storage-provisioner [3385223105aad4f9eae759a5ea590442a217953f9bdceb1c4cd3660b529f3c9d] <==
I0224 13:21:23.428192 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0224 13:21:23.449283 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0224 13:21:23.449336 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0224 13:21:40.926481 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0224 13:21:40.926898 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-041199_9e50ed10-9f39-4f5b-b5fb-50bad3e86885!
I0224 13:21:40.928161 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7f0cb064-abbd-47ff-819b-466509fe426a", APIVersion:"v1", ResourceVersion:"836", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-041199_9e50ed10-9f39-4f5b-b5fb-50bad3e86885 became leader
I0224 13:21:41.028764 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-041199_9e50ed10-9f39-4f5b-b5fb-50bad3e86885!
==> storage-provisioner [6cd982738bc8cda1f0b62e6554ba43407b6ce0389aabf4dece8688106ea1f992] <==
I0224 13:20:39.224161 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0224 13:21:09.229786 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-041199 -n old-k8s-version-041199
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-041199 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-4hkkq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-041199 describe pod metrics-server-9975d5f86-4hkkq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-041199 describe pod metrics-server-9975d5f86-4hkkq: exit status 1 (107.554173ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-4hkkq" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-041199 describe pod metrics-server-9975d5f86-4hkkq: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (382.60s)
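The dominant signal in the kubelet section of this log is the same two errors repeating: `dashboard-metrics-scraper` in `CrashLoopBackOff` and `metrics-server` in `ImagePullBackOff` (the latter expected here, since the test deliberately points metrics-server at the unreachable `fake.domain/registry.k8s.io/echoserver:1.4` image). A small, hypothetical Python sketch (not part of minikube or its test suite) shows one way to condense such a dump into a per-container error summary; the regex is an assumption matching the `pod_workers.go:191` line format seen above:

```python
import re
from collections import Counter

# Matches the 'failed to "StartContainer" for "<container>" with <Reason>'
# fragment of kubelet pod_workers.go error lines, as seen in the log above.
BACKOFF_RE = re.compile(
    r'failed to "StartContainer" for "(?P<container>[^"]+)" with (?P<reason>\w+)'
)

def summarize_backoffs(lines):
    """Count (container, reason) pairs across kubelet 'Error syncing pod' lines."""
    counts = Counter()
    for line in lines:
        m = BACKOFF_RE.search(line)
        if m:
            counts[(m.group("container"), m.group("reason"))] += 1
    return counts

# Abbreviated sample lines taken from the kubelet output above.
sample = [
    'E0224 13:26:07 pod_workers.go:191] failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s"',
    'E0224 13:26:11 pod_workers.go:191] failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image"',
    'E0224 13:26:18 pod_workers.go:191] failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s"',
]

for (container, reason), n in summarize_backoffs(sample).items():
    print(f"{container}: {reason} x{n}")
```

Run against the three sample lines, this prints a count of 2 for the scraper's `CrashLoopBackOff` and 1 for metrics-server's `ImagePullBackOff`; against the full log it makes the failure mode obvious at a glance without scrolling through hundreds of near-identical lines.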