=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-908523 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E0319 19:06:17.326837 306093 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/functional-690672/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:06:36.832781 306093 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/addons-991377/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:08:14.258320 306093 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/functional-690672/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-908523 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m18.658986502s)
-- stdout --
* [old-k8s-version-908523] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20544
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20544-300569/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-300569/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-908523" primary control-plane node in "old-k8s-version-908523" cluster
* Pulling base image v0.0.46-1741860993-20523 ...
* Restarting existing docker container for "old-k8s-version-908523" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
* Verifying Kubernetes components...
- Using image registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-908523 addons enable metrics-server
* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
-- /stdout --
** stderr **
I0319 19:05:54.925130 521487 out.go:345] Setting OutFile to fd 1 ...
I0319 19:05:54.925263 521487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 19:05:54.925275 521487 out.go:358] Setting ErrFile to fd 2...
I0319 19:05:54.925281 521487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 19:05:54.925555 521487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-300569/.minikube/bin
I0319 19:05:54.925955 521487 out.go:352] Setting JSON to false
I0319 19:05:54.927046 521487 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10089,"bootTime":1742401066,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I0319 19:05:54.927122 521487 start.go:139] virtualization:
I0319 19:05:54.931916 521487 out.go:177] * [old-k8s-version-908523] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0319 19:05:54.935048 521487 out.go:177] - MINIKUBE_LOCATION=20544
I0319 19:05:54.935094 521487 notify.go:220] Checking for updates...
I0319 19:05:54.938430 521487 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0319 19:05:54.941783 521487 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20544-300569/kubeconfig
I0319 19:05:54.944752 521487 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-300569/.minikube
I0319 19:05:54.947674 521487 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0319 19:05:54.950561 521487 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0319 19:05:54.953821 521487 config.go:182] Loaded profile config "old-k8s-version-908523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0319 19:05:54.957317 521487 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0319 19:05:54.960101 521487 driver.go:394] Setting default libvirt URI to qemu:///system
I0319 19:05:54.990275 521487 docker.go:123] docker version: linux-28.0.2:Docker Engine - Community
I0319 19:05:54.990444 521487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0319 19:05:55.048606 521487 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-03-19 19:05:55.039217788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0319 19:05:55.048717 521487 docker.go:318] overlay module found
I0319 19:05:55.051786 521487 out.go:177] * Using the docker driver based on existing profile
I0319 19:05:55.054704 521487 start.go:297] selected driver: docker
I0319 19:05:55.054722 521487 start.go:901] validating driver "docker" against &{Name:old-k8s-version-908523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0319 19:05:55.054816 521487 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0319 19:05:55.055579 521487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0319 19:05:55.116082 521487 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-03-19 19:05:55.100787754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0319 19:05:55.116423 521487 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0319 19:05:55.116458 521487 cni.go:84] Creating CNI manager for ""
I0319 19:05:55.116525 521487 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0319 19:05:55.116610 521487 start.go:340] cluster config:
{Name:old-k8s-version-908523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0319 19:05:55.119636 521487 out.go:177] * Starting "old-k8s-version-908523" primary control-plane node in "old-k8s-version-908523" cluster
I0319 19:05:55.122454 521487 cache.go:121] Beginning downloading kic base image for docker with containerd
I0319 19:05:55.125462 521487 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
I0319 19:05:55.128318 521487 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0319 19:05:55.128385 521487 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20544-300569/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0319 19:05:55.128398 521487 cache.go:56] Caching tarball of preloaded images
I0319 19:05:55.128395 521487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
I0319 19:05:55.128479 521487 preload.go:172] Found /home/jenkins/minikube-integration/20544-300569/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0319 19:05:55.128489 521487 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0319 19:05:55.128683 521487 profile.go:143] Saving config to /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/old-k8s-version-908523/config.json ...
I0319 19:05:55.148765 521487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
I0319 19:05:55.148788 521487 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
I0319 19:05:55.148802 521487 cache.go:230] Successfully downloaded all kic artifacts
I0319 19:05:55.148826 521487 start.go:360] acquireMachinesLock for old-k8s-version-908523: {Name:mkdce47576a354744b1a8fb3a4c4ccfccb836506 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0319 19:05:55.148894 521487 start.go:364] duration metric: took 41.018µs to acquireMachinesLock for "old-k8s-version-908523"
I0319 19:05:55.148929 521487 start.go:96] Skipping create...Using existing machine configuration
I0319 19:05:55.148939 521487 fix.go:54] fixHost starting:
I0319 19:05:55.149228 521487 cli_runner.go:164] Run: docker container inspect old-k8s-version-908523 --format={{.State.Status}}
I0319 19:05:55.166135 521487 fix.go:112] recreateIfNeeded on old-k8s-version-908523: state=Stopped err=<nil>
W0319 19:05:55.166165 521487 fix.go:138] unexpected machine state, will restart: <nil>
I0319 19:05:55.169461 521487 out.go:177] * Restarting existing docker container for "old-k8s-version-908523" ...
I0319 19:05:55.172481 521487 cli_runner.go:164] Run: docker start old-k8s-version-908523
I0319 19:05:55.447367 521487 cli_runner.go:164] Run: docker container inspect old-k8s-version-908523 --format={{.State.Status}}
I0319 19:05:55.466931 521487 kic.go:430] container "old-k8s-version-908523" state is running.
I0319 19:05:55.467345 521487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-908523
I0319 19:05:55.489249 521487 profile.go:143] Saving config to /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/old-k8s-version-908523/config.json ...
I0319 19:05:55.489485 521487 machine.go:93] provisionDockerMachine start ...
I0319 19:05:55.489553 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:05:55.522135 521487 main.go:141] libmachine: Using SSH client type: native
I0319 19:05:55.522461 521487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33438 <nil> <nil>}
I0319 19:05:55.522478 521487 main.go:141] libmachine: About to run SSH command:
hostname
I0319 19:05:55.523262 521487 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0319 19:05:58.647793 521487 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-908523
I0319 19:05:58.647816 521487 ubuntu.go:169] provisioning hostname "old-k8s-version-908523"
I0319 19:05:58.647907 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:05:58.665284 521487 main.go:141] libmachine: Using SSH client type: native
I0319 19:05:58.665597 521487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33438 <nil> <nil>}
I0319 19:05:58.665615 521487 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-908523 && echo "old-k8s-version-908523" | sudo tee /etc/hostname
I0319 19:05:58.800997 521487 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-908523
I0319 19:05:58.801078 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:05:58.819186 521487 main.go:141] libmachine: Using SSH client type: native
I0319 19:05:58.819520 521487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33438 <nil> <nil>}
I0319 19:05:58.819543 521487 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-908523' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-908523/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-908523' | sudo tee -a /etc/hosts;
fi
fi
I0319 19:05:58.944445 521487 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0319 19:05:58.944470 521487 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20544-300569/.minikube CaCertPath:/home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20544-300569/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20544-300569/.minikube}
I0319 19:05:58.944501 521487 ubuntu.go:177] setting up certificates
I0319 19:05:58.944512 521487 provision.go:84] configureAuth start
I0319 19:05:58.944609 521487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-908523
I0319 19:05:58.962197 521487 provision.go:143] copyHostCerts
I0319 19:05:58.962271 521487 exec_runner.go:144] found /home/jenkins/minikube-integration/20544-300569/.minikube/cert.pem, removing ...
I0319 19:05:58.962285 521487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20544-300569/.minikube/cert.pem
I0319 19:05:58.962365 521487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20544-300569/.minikube/cert.pem (1123 bytes)
I0319 19:05:58.962483 521487 exec_runner.go:144] found /home/jenkins/minikube-integration/20544-300569/.minikube/key.pem, removing ...
I0319 19:05:58.962495 521487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20544-300569/.minikube/key.pem
I0319 19:05:58.962524 521487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20544-300569/.minikube/key.pem (1679 bytes)
I0319 19:05:58.962596 521487 exec_runner.go:144] found /home/jenkins/minikube-integration/20544-300569/.minikube/ca.pem, removing ...
I0319 19:05:58.962605 521487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20544-300569/.minikube/ca.pem
I0319 19:05:58.962636 521487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20544-300569/.minikube/ca.pem (1078 bytes)
I0319 19:05:58.962698 521487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20544-300569/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-908523 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-908523]
I0319 19:05:59.578258 521487 provision.go:177] copyRemoteCerts
I0319 19:05:59.578333 521487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0319 19:05:59.578377 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:05:59.595343 521487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/old-k8s-version-908523/id_rsa Username:docker}
I0319 19:05:59.685142 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0319 19:05:59.708823 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0319 19:05:59.734067 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0319 19:05:59.758339 521487 provision.go:87] duration metric: took 813.808566ms to configureAuth
I0319 19:05:59.758366 521487 ubuntu.go:193] setting minikube options for container-runtime
I0319 19:05:59.758598 521487 config.go:182] Loaded profile config "old-k8s-version-908523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0319 19:05:59.758611 521487 machine.go:96] duration metric: took 4.269109514s to provisionDockerMachine
I0319 19:05:59.758620 521487 start.go:293] postStartSetup for "old-k8s-version-908523" (driver="docker")
I0319 19:05:59.758632 521487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0319 19:05:59.758689 521487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0319 19:05:59.758741 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:05:59.780513 521487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/old-k8s-version-908523/id_rsa Username:docker}
I0319 19:05:59.869555 521487 ssh_runner.go:195] Run: cat /etc/os-release
I0319 19:05:59.872474 521487 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0319 19:05:59.872508 521487 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0319 19:05:59.872519 521487 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0319 19:05:59.872527 521487 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0319 19:05:59.872536 521487 filesync.go:126] Scanning /home/jenkins/minikube-integration/20544-300569/.minikube/addons for local assets ...
I0319 19:05:59.872622 521487 filesync.go:126] Scanning /home/jenkins/minikube-integration/20544-300569/.minikube/files for local assets ...
I0319 19:05:59.872712 521487 filesync.go:149] local asset: /home/jenkins/minikube-integration/20544-300569/.minikube/files/etc/ssl/certs/3060932.pem -> 3060932.pem in /etc/ssl/certs
I0319 19:05:59.872830 521487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0319 19:05:59.881421 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/files/etc/ssl/certs/3060932.pem --> /etc/ssl/certs/3060932.pem (1708 bytes)
I0319 19:05:59.905268 521487 start.go:296] duration metric: took 146.628579ms for postStartSetup
I0319 19:05:59.905348 521487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0319 19:05:59.905390 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:05:59.922490 521487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/old-k8s-version-908523/id_rsa Username:docker}
I0319 19:06:00.010001 521487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0319 19:06:00.015054 521487 fix.go:56] duration metric: took 4.866107381s for fixHost
I0319 19:06:00.015136 521487 start.go:83] releasing machines lock for "old-k8s-version-908523", held for 4.866225339s
I0319 19:06:00.015225 521487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-908523
I0319 19:06:00.033925 521487 ssh_runner.go:195] Run: cat /version.json
I0319 19:06:00.033985 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:06:00.034113 521487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0319 19:06:00.034213 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:06:00.053982 521487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/old-k8s-version-908523/id_rsa Username:docker}
I0319 19:06:00.062538 521487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/old-k8s-version-908523/id_rsa Username:docker}
I0319 19:06:00.274856 521487 ssh_runner.go:195] Run: systemctl --version
I0319 19:06:00.279228 521487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0319 19:06:00.283578 521487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0319 19:06:00.301956 521487 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0319 19:06:00.302045 521487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0319 19:06:00.311183 521487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0319 19:06:00.311259 521487 start.go:495] detecting cgroup driver to use...
I0319 19:06:00.311329 521487 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0319 19:06:00.311388 521487 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0319 19:06:00.325713 521487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0319 19:06:00.337892 521487 docker.go:217] disabling cri-docker service (if available) ...
I0319 19:06:00.338008 521487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0319 19:06:00.350934 521487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0319 19:06:00.362919 521487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0319 19:06:00.449051 521487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0319 19:06:00.538503 521487 docker.go:233] disabling docker service ...
I0319 19:06:00.538597 521487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0319 19:06:00.551878 521487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0319 19:06:00.563850 521487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0319 19:06:00.653896 521487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0319 19:06:00.732754 521487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0319 19:06:00.744121 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0319 19:06:00.760978 521487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0319 19:06:00.771073 521487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0319 19:06:00.781282 521487 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0319 19:06:00.781401 521487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0319 19:06:00.790929 521487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0319 19:06:00.800769 521487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0319 19:06:00.810759 521487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0319 19:06:00.821749 521487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0319 19:06:00.830421 521487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0319 19:06:00.839944 521487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0319 19:06:00.848541 521487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0319 19:06:00.857096 521487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0319 19:06:00.937805 521487 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0319 19:06:01.103771 521487 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0319 19:06:01.103897 521487 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0319 19:06:01.112399 521487 start.go:563] Will wait 60s for crictl version
I0319 19:06:01.112518 521487 ssh_runner.go:195] Run: which crictl
I0319 19:06:01.122046 521487 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0319 19:06:01.161398 521487 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.25
RuntimeApiVersion: v1
I0319 19:06:01.161531 521487 ssh_runner.go:195] Run: containerd --version
I0319 19:06:01.188791 521487 ssh_runner.go:195] Run: containerd --version
I0319 19:06:01.222432 521487 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
I0319 19:06:01.226260 521487 cli_runner.go:164] Run: docker network inspect old-k8s-version-908523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0319 19:06:01.246746 521487 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0319 19:06:01.250414 521487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0319 19:06:01.265059 521487 kubeadm.go:883] updating cluster {Name:old-k8s-version-908523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0319 19:06:01.265172 521487 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0319 19:06:01.265231 521487 ssh_runner.go:195] Run: sudo crictl images --output json
I0319 19:06:01.307182 521487 containerd.go:627] all images are preloaded for containerd runtime.
I0319 19:06:01.307207 521487 containerd.go:534] Images already preloaded, skipping extraction
I0319 19:06:01.307270 521487 ssh_runner.go:195] Run: sudo crictl images --output json
I0319 19:06:01.346793 521487 containerd.go:627] all images are preloaded for containerd runtime.
I0319 19:06:01.346820 521487 cache_images.go:84] Images are preloaded, skipping loading
I0319 19:06:01.346829 521487 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
I0319 19:06:01.346996 521487 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-908523 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0319 19:06:01.347072 521487 ssh_runner.go:195] Run: sudo crictl info
I0319 19:06:01.386701 521487 cni.go:84] Creating CNI manager for ""
I0319 19:06:01.386732 521487 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0319 19:06:01.386768 521487 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0319 19:06:01.386796 521487 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-908523 NodeName:old-k8s-version-908523 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0319 19:06:01.386942 521487 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-908523"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0319 19:06:01.387023 521487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0319 19:06:01.396699 521487 binaries.go:44] Found k8s binaries, skipping transfer
I0319 19:06:01.396787 521487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0319 19:06:01.406408 521487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0319 19:06:01.426822 521487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0319 19:06:01.446357 521487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0319 19:06:01.466191 521487 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0319 19:06:01.469957 521487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0319 19:06:01.481604 521487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0319 19:06:01.568412 521487 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0319 19:06:01.583010 521487 certs.go:68] Setting up /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/old-k8s-version-908523 for IP: 192.168.85.2
I0319 19:06:01.583033 521487 certs.go:194] generating shared ca certs ...
I0319 19:06:01.583049 521487 certs.go:226] acquiring lock for ca certs: {Name:mka72ef37d967cad7bd9325c6ba9f8fdcb24c066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0319 19:06:01.583245 521487 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20544-300569/.minikube/ca.key
I0319 19:06:01.583332 521487 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20544-300569/.minikube/proxy-client-ca.key
I0319 19:06:01.583361 521487 certs.go:256] generating profile certs ...
I0319 19:06:01.583479 521487 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/old-k8s-version-908523/client.key
I0319 19:06:01.583572 521487 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/old-k8s-version-908523/apiserver.key.826d1336
I0319 19:06:01.583657 521487 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/old-k8s-version-908523/proxy-client.key
I0319 19:06:01.583798 521487 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/306093.pem (1338 bytes)
W0319 19:06:01.583873 521487 certs.go:480] ignoring /home/jenkins/minikube-integration/20544-300569/.minikube/certs/306093_empty.pem, impossibly tiny 0 bytes
I0319 19:06:01.583889 521487 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca-key.pem (1675 bytes)
I0319 19:06:01.583935 521487 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca.pem (1078 bytes)
I0319 19:06:01.583984 521487 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/cert.pem (1123 bytes)
I0319 19:06:01.584026 521487 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/key.pem (1679 bytes)
I0319 19:06:01.584114 521487 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/files/etc/ssl/certs/3060932.pem (1708 bytes)
I0319 19:06:01.584831 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0319 19:06:01.618716 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0319 19:06:01.649418 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0319 19:06:01.680371 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0319 19:06:01.710713 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/old-k8s-version-908523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0319 19:06:01.744611 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/old-k8s-version-908523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0319 19:06:01.778554 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/old-k8s-version-908523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0319 19:06:01.807662 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/old-k8s-version-908523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0319 19:06:01.836962 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0319 19:06:01.865921 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/certs/306093.pem --> /usr/share/ca-certificates/306093.pem (1338 bytes)
I0319 19:06:01.893498 521487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/files/etc/ssl/certs/3060932.pem --> /usr/share/ca-certificates/3060932.pem (1708 bytes)
I0319 19:06:01.920328 521487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0319 19:06:01.941245 521487 ssh_runner.go:195] Run: openssl version
I0319 19:06:01.947341 521487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3060932.pem && ln -fs /usr/share/ca-certificates/3060932.pem /etc/ssl/certs/3060932.pem"
I0319 19:06:01.958616 521487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3060932.pem
I0319 19:06:01.962309 521487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 18:26 /usr/share/ca-certificates/3060932.pem
I0319 19:06:01.962379 521487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3060932.pem
I0319 19:06:01.970725 521487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3060932.pem /etc/ssl/certs/3ec20f2e.0"
I0319 19:06:01.980787 521487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0319 19:06:01.991611 521487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0319 19:06:01.995383 521487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 18:18 /usr/share/ca-certificates/minikubeCA.pem
I0319 19:06:01.995531 521487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0319 19:06:02.003641 521487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0319 19:06:02.013639 521487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/306093.pem && ln -fs /usr/share/ca-certificates/306093.pem /etc/ssl/certs/306093.pem"
I0319 19:06:02.023906 521487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/306093.pem
I0319 19:06:02.028322 521487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 18:26 /usr/share/ca-certificates/306093.pem
I0319 19:06:02.028419 521487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/306093.pem
I0319 19:06:02.038063 521487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/306093.pem /etc/ssl/certs/51391683.0"
I0319 19:06:02.048157 521487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0319 19:06:02.052360 521487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0319 19:06:02.059846 521487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0319 19:06:02.067372 521487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0319 19:06:02.080833 521487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0319 19:06:02.093475 521487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0319 19:06:02.102201 521487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0319 19:06:02.110265 521487 kubeadm.go:392] StartCluster: {Name:old-k8s-version-908523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0319 19:06:02.110397 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0319 19:06:02.110507 521487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0319 19:06:02.166158 521487 cri.go:89] found id: "45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a"
I0319 19:06:02.166190 521487 cri.go:89] found id: "ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657"
I0319 19:06:02.166196 521487 cri.go:89] found id: "f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec"
I0319 19:06:02.166200 521487 cri.go:89] found id: "3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c"
I0319 19:06:02.166203 521487 cri.go:89] found id: "49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4"
I0319 19:06:02.166208 521487 cri.go:89] found id: "590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe"
I0319 19:06:02.166211 521487 cri.go:89] found id: "b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0"
I0319 19:06:02.166214 521487 cri.go:89] found id: "df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc"
I0319 19:06:02.166217 521487 cri.go:89] found id: ""
I0319 19:06:02.166271 521487 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0319 19:06:02.179696 521487 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-03-19T19:06:02Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0319 19:06:02.179798 521487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0319 19:06:02.190065 521487 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0319 19:06:02.190086 521487 kubeadm.go:593] restartPrimaryControlPlane start ...
I0319 19:06:02.190142 521487 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0319 19:06:02.199701 521487 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0319 19:06:02.200392 521487 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-908523" does not appear in /home/jenkins/minikube-integration/20544-300569/kubeconfig
I0319 19:06:02.200711 521487 kubeconfig.go:62] /home/jenkins/minikube-integration/20544-300569/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-908523" cluster setting kubeconfig missing "old-k8s-version-908523" context setting]
I0319 19:06:02.201190 521487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-300569/kubeconfig: {Name:mkacba6ab67fe1ca8a3d03569f0055410489e147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0319 19:06:02.202639 521487 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0319 19:06:02.213621 521487 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I0319 19:06:02.213658 521487 kubeadm.go:597] duration metric: took 23.565201ms to restartPrimaryControlPlane
I0319 19:06:02.213669 521487 kubeadm.go:394] duration metric: took 103.420577ms to StartCluster
I0319 19:06:02.213692 521487 settings.go:142] acquiring lock: {Name:mk92e2d35bdbbf8cdf17aa5c8f2d12a5eb6dbf61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0319 19:06:02.213781 521487 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20544-300569/kubeconfig
I0319 19:06:02.214899 521487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-300569/kubeconfig: {Name:mkacba6ab67fe1ca8a3d03569f0055410489e147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0319 19:06:02.215226 521487 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0319 19:06:02.215792 521487 config.go:182] Loaded profile config "old-k8s-version-908523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0319 19:06:02.215871 521487 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0319 19:06:02.215976 521487 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-908523"
I0319 19:06:02.215999 521487 addons.go:69] Setting dashboard=true in profile "old-k8s-version-908523"
I0319 19:06:02.216015 521487 addons.go:238] Setting addon dashboard=true in "old-k8s-version-908523"
W0319 19:06:02.216022 521487 addons.go:247] addon dashboard should already be in state true
I0319 19:06:02.216048 521487 host.go:66] Checking if "old-k8s-version-908523" exists ...
I0319 19:06:02.216053 521487 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-908523"
I0319 19:06:02.216434 521487 cli_runner.go:164] Run: docker container inspect old-k8s-version-908523 --format={{.State.Status}}
I0319 19:06:02.217138 521487 cli_runner.go:164] Run: docker container inspect old-k8s-version-908523 --format={{.State.Status}}
I0319 19:06:02.215978 521487 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-908523"
I0319 19:06:02.217332 521487 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-908523"
W0319 19:06:02.217348 521487 addons.go:247] addon storage-provisioner should already be in state true
I0319 19:06:02.217406 521487 host.go:66] Checking if "old-k8s-version-908523" exists ...
I0319 19:06:02.217928 521487 cli_runner.go:164] Run: docker container inspect old-k8s-version-908523 --format={{.State.Status}}
I0319 19:06:02.215987 521487 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-908523"
I0319 19:06:02.218527 521487 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-908523"
W0319 19:06:02.218545 521487 addons.go:247] addon metrics-server should already be in state true
I0319 19:06:02.218571 521487 host.go:66] Checking if "old-k8s-version-908523" exists ...
I0319 19:06:02.218998 521487 cli_runner.go:164] Run: docker container inspect old-k8s-version-908523 --format={{.State.Status}}
I0319 19:06:02.220946 521487 out.go:177] * Verifying Kubernetes components...
I0319 19:06:02.226797 521487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0319 19:06:02.274660 521487 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0319 19:06:02.282077 521487 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-908523"
W0319 19:06:02.282117 521487 addons.go:247] addon default-storageclass should already be in state true
I0319 19:06:02.282158 521487 host.go:66] Checking if "old-k8s-version-908523" exists ...
I0319 19:06:02.282764 521487 cli_runner.go:164] Run: docker container inspect old-k8s-version-908523 --format={{.State.Status}}
I0319 19:06:02.289766 521487 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0319 19:06:02.292845 521487 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0319 19:06:02.292872 521487 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0319 19:06:02.292958 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:06:02.322219 521487 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0319 19:06:02.325154 521487 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0319 19:06:02.325378 521487 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0319 19:06:02.326906 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:06:02.337623 521487 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0319 19:06:02.344717 521487 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0319 19:06:02.344743 521487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0319 19:06:02.344809 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:06:02.378814 521487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/old-k8s-version-908523/id_rsa Username:docker}
I0319 19:06:02.381654 521487 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0319 19:06:02.381681 521487 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0319 19:06:02.381761 521487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908523
I0319 19:06:02.395877 521487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/old-k8s-version-908523/id_rsa Username:docker}
I0319 19:06:02.414536 521487 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0319 19:06:02.428709 521487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/old-k8s-version-908523/id_rsa Username:docker}
I0319 19:06:02.460243 521487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/old-k8s-version-908523/id_rsa Username:docker}
I0319 19:06:02.468777 521487 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-908523" to be "Ready" ...
I0319 19:06:02.529902 521487 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0319 19:06:02.529987 521487 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0319 19:06:02.544968 521487 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0319 19:06:02.545057 521487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0319 19:06:02.552470 521487 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0319 19:06:02.552634 521487 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0319 19:06:02.583354 521487 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0319 19:06:02.583453 521487 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0319 19:06:02.587274 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0319 19:06:02.592592 521487 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0319 19:06:02.592666 521487 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0319 19:06:02.610857 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0319 19:06:02.639854 521487 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0319 19:06:02.639952 521487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0319 19:06:02.643991 521487 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0319 19:06:02.644065 521487 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0319 19:06:02.671350 521487 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0319 19:06:02.671424 521487 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0319 19:06:02.686036 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0319 19:06:02.724241 521487 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0319 19:06:02.724337 521487 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0319 19:06:02.817258 521487 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0319 19:06:02.817345 521487 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
W0319 19:06:02.821059 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:02.821136 521487 retry.go:31] will retry after 364.711837ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0319 19:06:02.860812 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:02.860850 521487 retry.go:31] will retry after 199.890608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0319 19:06:02.860986 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:02.861002 521487 retry.go:31] will retry after 215.087972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:02.865635 521487 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0319 19:06:02.865660 521487 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0319 19:06:02.883889 521487 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0319 19:06:02.883913 521487 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0319 19:06:02.902960 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0319 19:06:02.978514 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:02.978548 521487 retry.go:31] will retry after 357.405772ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.061735 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0319 19:06:03.077161 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0319 19:06:03.160116 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.160151 521487 retry.go:31] will retry after 483.046846ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0319 19:06:03.182737 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.182769 521487 retry.go:31] will retry after 318.824107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.186892 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0319 19:06:03.270727 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.270757 521487 retry.go:31] will retry after 208.676048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.336915 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0319 19:06:03.407572 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.407605 521487 retry.go:31] will retry after 550.071531ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.480356 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0319 19:06:03.502680 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0319 19:06:03.579012 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.579061 521487 retry.go:31] will retry after 744.269309ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0319 19:06:03.606135 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.606169 521487 retry.go:31] will retry after 703.041088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.643406 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0319 19:06:03.714775 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.714812 521487 retry.go:31] will retry after 497.370047ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:03.958167 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0319 19:06:04.033048 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:04.033084 521487 retry.go:31] will retry after 347.539794ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:04.212405 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0319 19:06:04.294111 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:04.294145 521487 retry.go:31] will retry after 534.555922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:04.310262 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0319 19:06:04.323573 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0319 19:06:04.380937 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0319 19:06:04.402507 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:04.402540 521487 retry.go:31] will retry after 705.863998ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0319 19:06:04.448675 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:04.448716 521487 retry.go:31] will retry after 1.234206744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:04.469309 521487 node_ready.go:53] error getting node "old-k8s-version-908523": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-908523": dial tcp 192.168.85.2:8443: connect: connection refused
W0319 19:06:04.487878 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:04.487951 521487 retry.go:31] will retry after 773.463167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:04.829444 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0319 19:06:04.898148 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:04.898180 521487 retry.go:31] will retry after 1.213722519s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:05.109288 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0319 19:06:05.186794 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:05.186827 521487 retry.go:31] will retry after 1.383400484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:05.261989 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0319 19:06:05.333465 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:05.333500 521487 retry.go:31] will retry after 1.727086561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:05.683461 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0319 19:06:05.755130 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:05.755163 521487 retry.go:31] will retry after 1.883658495s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:06.112934 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0319 19:06:06.184013 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:06.184044 521487 retry.go:31] will retry after 1.533011347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:06.469761 521487 node_ready.go:53] error getting node "old-k8s-version-908523": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-908523": dial tcp 192.168.85.2:8443: connect: connection refused
I0319 19:06:06.571057 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0319 19:06:06.649440 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:06.649474 521487 retry.go:31] will retry after 2.439782106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:07.061441 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0319 19:06:07.139076 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:07.139110 521487 retry.go:31] will retry after 2.75261209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:07.639051 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0319 19:06:07.710769 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:07.710846 521487 retry.go:31] will retry after 1.980116534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:07.718017 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0319 19:06:07.808836 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:07.808869 521487 retry.go:31] will retry after 3.497894655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:08.469892 521487 node_ready.go:53] error getting node "old-k8s-version-908523": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-908523": dial tcp 192.168.85.2:8443: connect: connection refused
I0319 19:06:09.090150 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0319 19:06:09.179010 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:09.179046 521487 retry.go:31] will retry after 2.165218349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:09.691139 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0319 19:06:09.769043 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:09.769076 521487 retry.go:31] will retry after 3.644778681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:09.892392 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0319 19:06:09.968408 521487 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:09.968442 521487 retry.go:31] will retry after 2.622168273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0319 19:06:10.969280 521487 node_ready.go:53] error getting node "old-k8s-version-908523": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-908523": dial tcp 192.168.85.2:8443: connect: connection refused
I0319 19:06:11.307584 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0319 19:06:11.345088 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0319 19:06:12.591654 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0319 19:06:13.414785 521487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0319 19:06:19.013961 521487 node_ready.go:49] node "old-k8s-version-908523" has status "Ready":"True"
I0319 19:06:19.013986 521487 node_ready.go:38] duration metric: took 16.545178077s for node "old-k8s-version-908523" to be "Ready" ...
I0319 19:06:19.013996 521487 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0319 19:06:19.141422 521487 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-xmp7g" in "kube-system" namespace to be "Ready" ...
I0319 19:06:19.231890 521487 pod_ready.go:93] pod "coredns-74ff55c5b-xmp7g" in "kube-system" namespace has status "Ready":"True"
I0319 19:06:19.231917 521487 pod_ready.go:82] duration metric: took 90.46373ms for pod "coredns-74ff55c5b-xmp7g" in "kube-system" namespace to be "Ready" ...
I0319 19:06:19.231929 521487 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-908523" in "kube-system" namespace to be "Ready" ...
I0319 19:06:19.247508 521487 pod_ready.go:93] pod "etcd-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"True"
I0319 19:06:19.247535 521487 pod_ready.go:82] duration metric: took 15.597427ms for pod "etcd-old-k8s-version-908523" in "kube-system" namespace to be "Ready" ...
I0319 19:06:19.247550 521487 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-908523" in "kube-system" namespace to be "Ready" ...
I0319 19:06:19.278578 521487 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"True"
I0319 19:06:19.278606 521487 pod_ready.go:82] duration metric: took 31.047677ms for pod "kube-apiserver-old-k8s-version-908523" in "kube-system" namespace to be "Ready" ...
I0319 19:06:19.278619 521487 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace to be "Ready" ...
I0319 19:06:19.932704 521487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.625084099s)
I0319 19:06:20.348358 521487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.00322905s)
I0319 19:06:20.348440 521487 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-908523"
I0319 19:06:20.379906 521487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.788200687s)
I0319 19:06:20.380087 521487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.965270744s)
I0319 19:06:20.382839 521487 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-908523 addons enable metrics-server
I0319 19:06:20.385943 521487 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
I0319 19:06:20.388910 521487 addons.go:514] duration metric: took 18.173038978s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
I0319 19:06:21.283642 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:23.291704 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:25.789022 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:28.283501 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:30.285332 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:32.784957 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:35.284102 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:37.284293 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:39.285160 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:41.785183 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:44.285347 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:46.788935 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:49.284188 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:51.284332 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:53.783441 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:55.784062 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:06:58.284244 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:00.285055 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:02.783639 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:04.785137 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:07.284410 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:09.785243 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:12.284603 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:14.785054 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:17.284333 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:19.290597 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:21.783975 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:24.284758 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:26.784254 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:28.784988 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:31.284586 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:33.785368 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:36.284389 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:38.284674 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:40.783956 521487 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:42.284788 521487 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"True"
I0319 19:07:42.284813 521487 pod_ready.go:82] duration metric: took 1m23.006186458s for pod "kube-controller-manager-old-k8s-version-908523" in "kube-system" namespace to be "Ready" ...
I0319 19:07:42.284824 521487 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-scv6d" in "kube-system" namespace to be "Ready" ...
I0319 19:07:42.289411 521487 pod_ready.go:93] pod "kube-proxy-scv6d" in "kube-system" namespace has status "Ready":"True"
I0319 19:07:42.289433 521487 pod_ready.go:82] duration metric: took 4.601564ms for pod "kube-proxy-scv6d" in "kube-system" namespace to be "Ready" ...
I0319 19:07:42.289444 521487 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-908523" in "kube-system" namespace to be "Ready" ...
I0319 19:07:44.294249 521487 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:46.295100 521487 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:48.295602 521487 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:49.295291 521487 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-908523" in "kube-system" namespace has status "Ready":"True"
I0319 19:07:49.295316 521487 pod_ready.go:82] duration metric: took 7.005863981s for pod "kube-scheduler-old-k8s-version-908523" in "kube-system" namespace to be "Ready" ...
I0319 19:07:49.295327 521487 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace to be "Ready" ...
I0319 19:07:51.301826 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:53.800624 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:55.800768 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:07:58.301307 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:00.301696 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:02.800910 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:04.801784 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:07.301238 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:09.800896 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:11.801177 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:13.802573 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:16.300794 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:18.801501 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:21.301709 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:23.800326 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:25.800736 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:28.301916 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:30.800486 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:33.300431 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:35.302075 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:37.801175 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:40.301570 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:42.801048 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:44.801511 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:47.299580 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:49.301013 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:51.301714 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:53.800595 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:55.800723 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:08:57.801841 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:00.300454 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:02.301619 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:04.800224 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:06.801093 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:08.801223 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:10.801514 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:13.301079 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:15.801239 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:18.302336 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:20.800740 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:22.801238 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:24.801349 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:27.301021 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:29.301448 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:31.801417 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:34.301703 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:36.800408 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:39.300505 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:41.301339 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:43.303331 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:45.304861 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:47.801441 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:49.802226 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:52.300230 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:54.303047 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:56.801427 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:09:58.801466 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:01.304130 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:03.800609 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:05.801532 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:08.301005 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:10.395342 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:12.808770 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:15.302127 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:17.800993 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:19.801939 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:21.805992 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:24.300816 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:26.301398 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:28.301545 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:30.801774 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:32.802157 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:34.803293 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:37.301702 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:39.801027 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:42.301721 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:44.302143 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:46.801447 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:49.300526 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:51.307320 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:53.802183 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:56.299877 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:10:58.801556 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:01.301003 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:03.301328 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:05.799754 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:07.806169 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:10.301071 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:12.801236 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:14.801397 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:17.300495 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:19.303203 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:21.800212 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:23.800605 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:25.808799 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:28.301083 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:30.305144 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:32.801244 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:34.804777 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:37.301234 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:39.301343 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:41.800853 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:44.302273 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:46.801371 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:49.296298 521487 pod_ready.go:82] duration metric: took 4m0.00092604s for pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace to be "Ready" ...
E0319 19:11:49.296327 521487 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0319 19:11:49.296338  521487 pod_ready.go:39] duration metric: took 5m30.282329345s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0319 19:11:49.296356 521487 api_server.go:52] waiting for apiserver process to appear ...
I0319 19:11:49.296404 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0319 19:11:49.296474 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0319 19:11:49.359231 521487 cri.go:89] found id: "4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b"
I0319 19:11:49.359249 521487 cri.go:89] found id: "b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0"
I0319 19:11:49.359254 521487 cri.go:89] found id: ""
I0319 19:11:49.359262 521487 logs.go:282] 2 containers: [4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0]
I0319 19:11:49.359322 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.363487 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.367117 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0319 19:11:49.367188 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0319 19:11:49.432706 521487 cri.go:89] found id: "9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432"
I0319 19:11:49.432726 521487 cri.go:89] found id: "590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe"
I0319 19:11:49.432730 521487 cri.go:89] found id: ""
I0319 19:11:49.432738 521487 logs.go:282] 2 containers: [9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432 590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe]
I0319 19:11:49.432795 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.436691 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.441989 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0319 19:11:49.442059 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0319 19:11:49.512212 521487 cri.go:89] found id: "bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9"
I0319 19:11:49.512229 521487 cri.go:89] found id: "45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a"
I0319 19:11:49.512234 521487 cri.go:89] found id: ""
I0319 19:11:49.512241 521487 logs.go:282] 2 containers: [bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9 45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a]
I0319 19:11:49.512297 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.516172 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.519834 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0319 19:11:49.519912 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0319 19:11:49.569434 521487 cri.go:89] found id: "c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0"
I0319 19:11:49.569507 521487 cri.go:89] found id: "49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4"
I0319 19:11:49.569530 521487 cri.go:89] found id: ""
I0319 19:11:49.569551 521487 logs.go:282] 2 containers: [c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0 49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4]
I0319 19:11:49.569649 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.573889 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.578263 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0319 19:11:49.578387 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0319 19:11:49.631283 521487 cri.go:89] found id: "5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587"
I0319 19:11:49.631344 521487 cri.go:89] found id: "3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c"
I0319 19:11:49.631372 521487 cri.go:89] found id: ""
I0319 19:11:49.631393 521487 logs.go:282] 2 containers: [5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587 3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c]
I0319 19:11:49.631479 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.635503 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.639502 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0319 19:11:49.639633 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0319 19:11:49.693635 521487 cri.go:89] found id: "06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2"
I0319 19:11:49.693705 521487 cri.go:89] found id: "df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc"
I0319 19:11:49.693739 521487 cri.go:89] found id: ""
I0319 19:11:49.693766 521487 logs.go:282] 2 containers: [06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2 df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc]
I0319 19:11:49.693854 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.698373 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.702396 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0319 19:11:49.702473 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0319 19:11:49.753286 521487 cri.go:89] found id: "c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934"
I0319 19:11:49.753310 521487 cri.go:89] found id: "ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657"
I0319 19:11:49.753316 521487 cri.go:89] found id: ""
I0319 19:11:49.753323 521487 logs.go:282] 2 containers: [c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934 ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657]
I0319 19:11:49.753381 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.757668 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.761347 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0319 19:11:49.761429 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0319 19:11:49.810357 521487 cri.go:89] found id: "8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc"
I0319 19:11:49.810379 521487 cri.go:89] found id: "f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec"
I0319 19:11:49.810384 521487 cri.go:89] found id: ""
I0319 19:11:49.810392 521487 logs.go:282] 2 containers: [8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec]
I0319 19:11:49.810449 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.814612 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.818351 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0319 19:11:49.818418 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0319 19:11:49.887489 521487 cri.go:89] found id: "12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8"
I0319 19:11:49.887513 521487 cri.go:89] found id: ""
I0319 19:11:49.887522  521487 logs.go:282] 1 container: [12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8]
I0319 19:11:49.887590 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.891950 521487 logs.go:123] Gathering logs for kube-proxy [5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587] ...
I0319 19:11:49.891975 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587"
I0319 19:11:49.947849 521487 logs.go:123] Gathering logs for kube-proxy [3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c] ...
I0319 19:11:49.947880 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c"
I0319 19:11:50.004674 521487 logs.go:123] Gathering logs for kube-controller-manager [df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc] ...
I0319 19:11:50.004704 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc"
I0319 19:11:50.081391 521487 logs.go:123] Gathering logs for containerd ...
I0319 19:11:50.081431 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0319 19:11:50.165806 521487 logs.go:123] Gathering logs for kubelet ...
I0319 19:11:50.165855 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0319 19:11:50.254447 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.870626 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-gnznl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-gnznl" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.254700 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.872945 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.254952 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.873232 662 reflector.go:138] object-"kube-system"/"coredns-token-mptx4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mptx4" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.255208 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.880644 662 reflector.go:138] object-"kube-system"/"kindnet-token-w7f6q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-w7f6q" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.255472 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.881550 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-wx6lx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-wx6lx" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.255727 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.884701 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.260319 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:19 old-k8s-version-908523 kubelet[662]: E0319 19:06:19.002056 662 reflector.go:138] object-"default"/"default-token-2xksl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2xksl" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.260613 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:19 old-k8s-version-908523 kubelet[662]: E0319 19:06:19.003962 662 reflector.go:138] object-"kube-system"/"metrics-server-token-rqzd4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-rqzd4" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.269670 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:20 old-k8s-version-908523 kubelet[662]: E0319 19:06:20.811808 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:11:50.270149 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:21 old-k8s-version-908523 kubelet[662]: E0319 19:06:21.265120 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.274455 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:37 old-k8s-version-908523 kubelet[662]: E0319 19:06:37.104000 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:11:50.274903 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:37 old-k8s-version-908523 kubelet[662]: E0319 19:06:37.880129 662 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-hd7qz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-hd7qz" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.276958 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:49 old-k8s-version-908523 kubelet[662]: E0319 19:06:49.389186 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.277343 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:50 old-k8s-version-908523 kubelet[662]: E0319 19:06:50.396880 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.277901 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:51 old-k8s-version-908523 kubelet[662]: E0319 19:06:51.091415 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.278239 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:57 old-k8s-version-908523 kubelet[662]: E0319 19:06:57.352720 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.281268 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:02 old-k8s-version-908523 kubelet[662]: E0319 19:07:02.111155 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:11:50.281911 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:10 old-k8s-version-908523 kubelet[662]: E0319 19:07:10.454369 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.282098 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:17 old-k8s-version-908523 kubelet[662]: E0319 19:07:17.091490 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.282430 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:17 old-k8s-version-908523 kubelet[662]: E0319 19:07:17.352786 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.282616 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:28 old-k8s-version-908523 kubelet[662]: E0319 19:07:28.091504 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.282944 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:29 old-k8s-version-908523 kubelet[662]: E0319 19:07:29.090789 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.283534 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:41 old-k8s-version-908523 kubelet[662]: E0319 19:07:41.541724 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.283723 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:42 old-k8s-version-908523 kubelet[662]: E0319 19:07:42.092835 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.284083 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:47 old-k8s-version-908523 kubelet[662]: E0319 19:07:47.352704 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.286693 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:54 old-k8s-version-908523 kubelet[662]: E0319 19:07:54.103301 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:11:50.287026 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:01 old-k8s-version-908523 kubelet[662]: E0319 19:08:01.090920 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.287212 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:08 old-k8s-version-908523 kubelet[662]: E0319 19:08:08.094609 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.287542 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:15 old-k8s-version-908523 kubelet[662]: E0319 19:08:15.090816 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.287759 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:21 old-k8s-version-908523 kubelet[662]: E0319 19:08:21.091743 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.288365 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:28 old-k8s-version-908523 kubelet[662]: E0319 19:08:28.691752 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.288590 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:33 old-k8s-version-908523 kubelet[662]: E0319 19:08:33.091096 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.288949 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:37 old-k8s-version-908523 kubelet[662]: E0319 19:08:37.352894 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.289141 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:46 old-k8s-version-908523 kubelet[662]: E0319 19:08:46.091350 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.289470 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:49 old-k8s-version-908523 kubelet[662]: E0319 19:08:49.090767 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.289716 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:57 old-k8s-version-908523 kubelet[662]: E0319 19:08:57.091105 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.290048 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:03 old-k8s-version-908523 kubelet[662]: E0319 19:09:03.090799 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.290235 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:09 old-k8s-version-908523 kubelet[662]: E0319 19:09:09.091236 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.290564 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:14 old-k8s-version-908523 kubelet[662]: E0319 19:09:14.093204 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.293087 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:24 old-k8s-version-908523 kubelet[662]: E0319 19:09:24.099584 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:11:50.293455 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:29 old-k8s-version-908523 kubelet[662]: E0319 19:09:29.090926 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.293643 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:39 old-k8s-version-908523 kubelet[662]: E0319 19:09:39.091384 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.294012 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:42 old-k8s-version-908523 kubelet[662]: E0319 19:09:42.091266 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.294263 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:52 old-k8s-version-908523 kubelet[662]: E0319 19:09:52.095436 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.294857 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:55 old-k8s-version-908523 kubelet[662]: E0319 19:09:55.921112 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.295208 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:57 old-k8s-version-908523 kubelet[662]: E0319 19:09:57.353126 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.295397 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:07 old-k8s-version-908523 kubelet[662]: E0319 19:10:07.091358 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.295738 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:12 old-k8s-version-908523 kubelet[662]: E0319 19:10:12.098552 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.295927 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:21 old-k8s-version-908523 kubelet[662]: E0319 19:10:21.092107 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.296296 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:25 old-k8s-version-908523 kubelet[662]: E0319 19:10:25.090826 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.296492 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:32 old-k8s-version-908523 kubelet[662]: E0319 19:10:32.096004 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.296949 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:37 old-k8s-version-908523 kubelet[662]: E0319 19:10:37.090777 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.297138 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:43 old-k8s-version-908523 kubelet[662]: E0319 19:10:43.091133 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.297470 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:52 old-k8s-version-908523 kubelet[662]: E0319 19:10:52.095231 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.297656 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:54 old-k8s-version-908523 kubelet[662]: E0319 19:10:54.091203 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.297841 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:06 old-k8s-version-908523 kubelet[662]: E0319 19:11:06.092747 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.298170 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:07 old-k8s-version-908523 kubelet[662]: E0319 19:11:07.090893 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.298502 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:19 old-k8s-version-908523 kubelet[662]: E0319 19:11:19.091499 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.298714 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:21 old-k8s-version-908523 kubelet[662]: E0319 19:11:21.091450 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.299067 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:30 old-k8s-version-908523 kubelet[662]: E0319 19:11:30.091870 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.299256 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:35 old-k8s-version-908523 kubelet[662]: E0319 19:11:35.091408 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.299643 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:43 old-k8s-version-908523 kubelet[662]: E0319 19:11:43.091203 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.299861 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:49 old-k8s-version-908523 kubelet[662]: E0319 19:11:49.091197 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0319 19:11:50.299875 521487 logs.go:123] Gathering logs for describe nodes ...
I0319 19:11:50.299889 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0319 19:11:50.519556 521487 logs.go:123] Gathering logs for etcd [9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432] ...
I0319 19:11:50.519647 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432"
I0319 19:11:50.577633 521487 logs.go:123] Gathering logs for coredns [45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a] ...
I0319 19:11:50.577707 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a"
I0319 19:11:50.644683 521487 logs.go:123] Gathering logs for kube-scheduler [c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0] ...
I0319 19:11:50.644776 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0"
I0319 19:11:50.708784 521487 logs.go:123] Gathering logs for kube-scheduler [49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4] ...
I0319 19:11:50.708924 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4"
I0319 19:11:50.783758 521487 logs.go:123] Gathering logs for kubernetes-dashboard [12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8] ...
I0319 19:11:50.783841 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8"
I0319 19:11:50.840618 521487 logs.go:123] Gathering logs for kube-apiserver [b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0] ...
I0319 19:11:50.840704 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0"
I0319 19:11:50.936153 521487 logs.go:123] Gathering logs for kube-controller-manager [06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2] ...
I0319 19:11:50.936263 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2"
I0319 19:11:51.014675 521487 logs.go:123] Gathering logs for kindnet [ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657] ...
I0319 19:11:51.014818 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657"
I0319 19:11:51.075839 521487 logs.go:123] Gathering logs for storage-provisioner [8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc] ...
I0319 19:11:51.075938 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc"
I0319 19:11:51.144397 521487 logs.go:123] Gathering logs for container status ...
I0319 19:11:51.144503 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0319 19:11:51.224616 521487 logs.go:123] Gathering logs for dmesg ...
I0319 19:11:51.224695 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0319 19:11:51.246141 521487 logs.go:123] Gathering logs for kube-apiserver [4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b] ...
I0319 19:11:51.246220 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b"
I0319 19:11:51.343408 521487 logs.go:123] Gathering logs for etcd [590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe] ...
I0319 19:11:51.343492 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe"
I0319 19:11:51.414566 521487 logs.go:123] Gathering logs for coredns [bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9] ...
I0319 19:11:51.414644 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9"
I0319 19:11:51.466923 521487 logs.go:123] Gathering logs for kindnet [c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934] ...
I0319 19:11:51.466996 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934"
I0319 19:11:51.545942 521487 logs.go:123] Gathering logs for storage-provisioner [f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec] ...
I0319 19:11:51.546017 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec"
I0319 19:11:51.666420 521487 out.go:358] Setting ErrFile to fd 2...
I0319 19:11:51.666495 521487 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0319 19:11:51.666575 521487 out.go:270] X Problems detected in kubelet:
W0319 19:11:51.666625 521487 out.go:270] Mar 19 19:11:21 old-k8s-version-908523 kubelet[662]: E0319 19:11:21.091450 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:51.666676 521487 out.go:270] Mar 19 19:11:30 old-k8s-version-908523 kubelet[662]: E0319 19:11:30.091870 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:51.666732 521487 out.go:270] Mar 19 19:11:35 old-k8s-version-908523 kubelet[662]: E0319 19:11:35.091408 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:51.666769 521487 out.go:270] Mar 19 19:11:43 old-k8s-version-908523 kubelet[662]: E0319 19:11:43.091203 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:51.666818 521487 out.go:270] Mar 19 19:11:49 old-k8s-version-908523 kubelet[662]: E0319 19:11:49.091197 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0319 19:11:51.666848 521487 out.go:358] Setting ErrFile to fd 2...
I0319 19:11:51.666888 521487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 19:12:01.670806 521487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0319 19:12:01.688874 521487 api_server.go:72] duration metric: took 5m59.473606709s to wait for apiserver process to appear ...
I0319 19:12:01.688897 521487 api_server.go:88] waiting for apiserver healthz status ...
I0319 19:12:01.688933 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0319 19:12:01.688992 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0319 19:12:01.740452 521487 cri.go:89] found id: "4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b"
I0319 19:12:01.740472 521487 cri.go:89] found id: "b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0"
I0319 19:12:01.740477 521487 cri.go:89] found id: ""
I0319 19:12:01.740485 521487 logs.go:282] 2 containers: [4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0]
I0319 19:12:01.740638 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.744921 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.748804 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0319 19:12:01.748898 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0319 19:12:01.792172 521487 cri.go:89] found id: "9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432"
I0319 19:12:01.792197 521487 cri.go:89] found id: "590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe"
I0319 19:12:01.792202 521487 cri.go:89] found id: ""
I0319 19:12:01.792212 521487 logs.go:282] 2 containers: [9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432 590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe]
I0319 19:12:01.792275 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.796223 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.800123 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0319 19:12:01.800200 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0319 19:12:01.847507 521487 cri.go:89] found id: "bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9"
I0319 19:12:01.847533 521487 cri.go:89] found id: "45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a"
I0319 19:12:01.847538 521487 cri.go:89] found id: ""
I0319 19:12:01.847546 521487 logs.go:282] 2 containers: [bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9 45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a]
I0319 19:12:01.847606 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.852014 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.856137 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0319 19:12:01.856221 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0319 19:12:01.909386 521487 cri.go:89] found id: "c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0"
I0319 19:12:01.909413 521487 cri.go:89] found id: "49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4"
I0319 19:12:01.909419 521487 cri.go:89] found id: ""
I0319 19:12:01.909426 521487 logs.go:282] 2 containers: [c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0 49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4]
I0319 19:12:01.909487 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.913873 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.917717 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0319 19:12:01.917798 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0319 19:12:01.959483 521487 cri.go:89] found id: "5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587"
I0319 19:12:01.959560 521487 cri.go:89] found id: "3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c"
I0319 19:12:01.959597 521487 cri.go:89] found id: ""
I0319 19:12:01.959631 521487 logs.go:282] 2 containers: [5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587 3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c]
I0319 19:12:01.959724 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.963532 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.967259 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0319 19:12:01.967374 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0319 19:12:02.015881 521487 cri.go:89] found id: "06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2"
I0319 19:12:02.015909 521487 cri.go:89] found id: "df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc"
I0319 19:12:02.015916 521487 cri.go:89] found id: ""
I0319 19:12:02.015924 521487 logs.go:282] 2 containers: [06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2 df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc]
I0319 19:12:02.015984 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.020052 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.023598 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0319 19:12:02.023687 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0319 19:12:02.074970 521487 cri.go:89] found id: "c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934"
I0319 19:12:02.074998 521487 cri.go:89] found id: "ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657"
I0319 19:12:02.075003 521487 cri.go:89] found id: ""
I0319 19:12:02.075012 521487 logs.go:282] 2 containers: [c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934 ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657]
I0319 19:12:02.075079 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.079064 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.083205 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0319 19:12:02.083295 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0319 19:12:02.141008 521487 cri.go:89] found id: "12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8"
I0319 19:12:02.141039 521487 cri.go:89] found id: ""
I0319 19:12:02.141048 521487 logs.go:282] 1 containers: [12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8]
I0319 19:12:02.141114 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.145062 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0319 19:12:02.145159 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0319 19:12:02.186897 521487 cri.go:89] found id: "8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc"
I0319 19:12:02.186924 521487 cri.go:89] found id: "f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec"
I0319 19:12:02.186929 521487 cri.go:89] found id: ""
I0319 19:12:02.186937 521487 logs.go:282] 2 containers: [8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec]
I0319 19:12:02.186996 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.190758 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.194646 521487 logs.go:123] Gathering logs for kube-apiserver [b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0] ...
I0319 19:12:02.194729 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0"
I0319 19:12:02.260079 521487 logs.go:123] Gathering logs for etcd [9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432] ...
I0319 19:12:02.260119 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432"
I0319 19:12:02.308149 521487 logs.go:123] Gathering logs for coredns [bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9] ...
I0319 19:12:02.308179 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9"
I0319 19:12:02.355515 521487 logs.go:123] Gathering logs for coredns [45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a] ...
I0319 19:12:02.355541 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a"
I0319 19:12:02.395216 521487 logs.go:123] Gathering logs for kube-controller-manager [06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2] ...
I0319 19:12:02.395248 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2"
I0319 19:12:02.481469 521487 logs.go:123] Gathering logs for storage-provisioner [f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec] ...
I0319 19:12:02.481507 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec"
I0319 19:12:02.528694 521487 logs.go:123] Gathering logs for containerd ...
I0319 19:12:02.528723 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0319 19:12:02.582259 521487 logs.go:123] Gathering logs for describe nodes ...
I0319 19:12:02.582296 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0319 19:12:02.730960 521487 logs.go:123] Gathering logs for kube-scheduler [49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4] ...
I0319 19:12:02.730996 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4"
I0319 19:12:02.773486 521487 logs.go:123] Gathering logs for kube-proxy [5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587] ...
I0319 19:12:02.773521 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587"
I0319 19:12:02.833819 521487 logs.go:123] Gathering logs for kindnet [ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657] ...
I0319 19:12:02.833847 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657"
I0319 19:12:02.892073 521487 logs.go:123] Gathering logs for container status ...
I0319 19:12:02.892103 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0319 19:12:02.947033 521487 logs.go:123] Gathering logs for dmesg ...
I0319 19:12:02.947064 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0319 19:12:02.964937 521487 logs.go:123] Gathering logs for kindnet [c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934] ...
I0319 19:12:02.964964 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934"
I0319 19:12:03.025385 521487 logs.go:123] Gathering logs for storage-provisioner [8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc] ...
I0319 19:12:03.025414 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc"
I0319 19:12:03.078094 521487 logs.go:123] Gathering logs for kubelet ...
I0319 19:12:03.078119 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0319 19:12:03.127371 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.870626 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-gnznl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-gnznl" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.127594 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.872945 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.127813 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.873232 662 reflector.go:138] object-"kube-system"/"coredns-token-mptx4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mptx4" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.128025 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.880644 662 reflector.go:138] object-"kube-system"/"kindnet-token-w7f6q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-w7f6q" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.128239 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.881550 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-wx6lx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-wx6lx" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.128443 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.884701 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.132335 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:19 old-k8s-version-908523 kubelet[662]: E0319 19:06:19.002056 662 reflector.go:138] object-"default"/"default-token-2xksl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2xksl" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.132630 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:19 old-k8s-version-908523 kubelet[662]: E0319 19:06:19.003962 662 reflector.go:138] object-"kube-system"/"metrics-server-token-rqzd4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-rqzd4" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.140667 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:20 old-k8s-version-908523 kubelet[662]: E0319 19:06:20.811808 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:12:03.141069 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:21 old-k8s-version-908523 kubelet[662]: E0319 19:06:21.265120 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.144417 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:37 old-k8s-version-908523 kubelet[662]: E0319 19:06:37.104000 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:12:03.144926 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:37 old-k8s-version-908523 kubelet[662]: E0319 19:06:37.880129 662 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-hd7qz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-hd7qz" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.146826 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:49 old-k8s-version-908523 kubelet[662]: E0319 19:06:49.389186 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.147155 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:50 old-k8s-version-908523 kubelet[662]: E0319 19:06:50.396880 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.147676 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:51 old-k8s-version-908523 kubelet[662]: E0319 19:06:51.091415 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.148009 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:57 old-k8s-version-908523 kubelet[662]: E0319 19:06:57.352720 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.150826 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:02 old-k8s-version-908523 kubelet[662]: E0319 19:07:02.111155 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:12:03.151421 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:10 old-k8s-version-908523 kubelet[662]: E0319 19:07:10.454369 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.151606 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:17 old-k8s-version-908523 kubelet[662]: E0319 19:07:17.091490 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.151942 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:17 old-k8s-version-908523 kubelet[662]: E0319 19:07:17.352786 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.152127 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:28 old-k8s-version-908523 kubelet[662]: E0319 19:07:28.091504 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.152453 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:29 old-k8s-version-908523 kubelet[662]: E0319 19:07:29.090789 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.153044 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:41 old-k8s-version-908523 kubelet[662]: E0319 19:07:41.541724 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.153230 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:42 old-k8s-version-908523 kubelet[662]: E0319 19:07:42.092835 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.153591 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:47 old-k8s-version-908523 kubelet[662]: E0319 19:07:47.352704 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.156049 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:54 old-k8s-version-908523 kubelet[662]: E0319 19:07:54.103301 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:12:03.156375 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:01 old-k8s-version-908523 kubelet[662]: E0319 19:08:01.090920 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.156568 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:08 old-k8s-version-908523 kubelet[662]: E0319 19:08:08.094609 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.156895 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:15 old-k8s-version-908523 kubelet[662]: E0319 19:08:15.090816 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.157078 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:21 old-k8s-version-908523 kubelet[662]: E0319 19:08:21.091743 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.157664 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:28 old-k8s-version-908523 kubelet[662]: E0319 19:08:28.691752 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.157849 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:33 old-k8s-version-908523 kubelet[662]: E0319 19:08:33.091096 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.158177 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:37 old-k8s-version-908523 kubelet[662]: E0319 19:08:37.352894 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.158362 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:46 old-k8s-version-908523 kubelet[662]: E0319 19:08:46.091350 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.158689 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:49 old-k8s-version-908523 kubelet[662]: E0319 19:08:49.090767 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.158929 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:57 old-k8s-version-908523 kubelet[662]: E0319 19:08:57.091105 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.159257 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:03 old-k8s-version-908523 kubelet[662]: E0319 19:09:03.090799 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.159441 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:09 old-k8s-version-908523 kubelet[662]: E0319 19:09:09.091236 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.159771 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:14 old-k8s-version-908523 kubelet[662]: E0319 19:09:14.093204 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.162205 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:24 old-k8s-version-908523 kubelet[662]: E0319 19:09:24.099584 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:12:03.162533 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:29 old-k8s-version-908523 kubelet[662]: E0319 19:09:29.090926 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.162717 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:39 old-k8s-version-908523 kubelet[662]: E0319 19:09:39.091384 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.163044 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:42 old-k8s-version-908523 kubelet[662]: E0319 19:09:42.091266 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.163227 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:52 old-k8s-version-908523 kubelet[662]: E0319 19:09:52.095436 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.163817 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:55 old-k8s-version-908523 kubelet[662]: E0319 19:09:55.921112 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.164142 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:57 old-k8s-version-908523 kubelet[662]: E0319 19:09:57.353126 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.164325 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:07 old-k8s-version-908523 kubelet[662]: E0319 19:10:07.091358 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.164654 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:12 old-k8s-version-908523 kubelet[662]: E0319 19:10:12.098552 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.164838 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:21 old-k8s-version-908523 kubelet[662]: E0319 19:10:21.092107 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.165187 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:25 old-k8s-version-908523 kubelet[662]: E0319 19:10:25.090826 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.165376 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:32 old-k8s-version-908523 kubelet[662]: E0319 19:10:32.096004 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.165701 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:37 old-k8s-version-908523 kubelet[662]: E0319 19:10:37.090777 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.165885 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:43 old-k8s-version-908523 kubelet[662]: E0319 19:10:43.091133 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.166214 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:52 old-k8s-version-908523 kubelet[662]: E0319 19:10:52.095231 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.166397 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:54 old-k8s-version-908523 kubelet[662]: E0319 19:10:54.091203 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.166580 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:06 old-k8s-version-908523 kubelet[662]: E0319 19:11:06.092747 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.166912 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:07 old-k8s-version-908523 kubelet[662]: E0319 19:11:07.090893 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.167238 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:19 old-k8s-version-908523 kubelet[662]: E0319 19:11:19.091499 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.167421 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:21 old-k8s-version-908523 kubelet[662]: E0319 19:11:21.091450 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.167750 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:30 old-k8s-version-908523 kubelet[662]: E0319 19:11:30.091870 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.167938 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:35 old-k8s-version-908523 kubelet[662]: E0319 19:11:35.091408 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.168264 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:43 old-k8s-version-908523 kubelet[662]: E0319 19:11:43.091203 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.168447 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:49 old-k8s-version-908523 kubelet[662]: E0319 19:11:49.091197 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.168778 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:58 old-k8s-version-908523 kubelet[662]: E0319 19:11:58.090729 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.168962 521487 logs.go:138] Found kubelet problem: Mar 19 19:12:02 old-k8s-version-908523 kubelet[662]: E0319 19:12:02.093089 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0319 19:12:03.168973 521487 logs.go:123] Gathering logs for etcd [590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe] ...
I0319 19:12:03.168988 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe"
I0319 19:12:03.214290 521487 logs.go:123] Gathering logs for kube-scheduler [c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0] ...
I0319 19:12:03.214322 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0"
I0319 19:12:03.266468 521487 logs.go:123] Gathering logs for kube-proxy [3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c] ...
I0319 19:12:03.266496 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c"
I0319 19:12:03.313812 521487 logs.go:123] Gathering logs for kube-controller-manager [df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc] ...
I0319 19:12:03.313838 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc"
I0319 19:12:03.383425 521487 logs.go:123] Gathering logs for kubernetes-dashboard [12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8] ...
I0319 19:12:03.383465 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8"
I0319 19:12:03.427288 521487 logs.go:123] Gathering logs for kube-apiserver [4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b] ...
I0319 19:12:03.427317 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b"
I0319 19:12:03.503630 521487 out.go:358] Setting ErrFile to fd 2...
I0319 19:12:03.503664 521487 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0319 19:12:03.503734 521487 out.go:270] X Problems detected in kubelet:
W0319 19:12:03.503749 521487 out.go:270] Mar 19 19:11:35 old-k8s-version-908523 kubelet[662]: E0319 19:11:35.091408 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.503756 521487 out.go:270] Mar 19 19:11:43 old-k8s-version-908523 kubelet[662]: E0319 19:11:43.091203 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.503767 521487 out.go:270] Mar 19 19:11:49 old-k8s-version-908523 kubelet[662]: E0319 19:11:49.091197 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.503776 521487 out.go:270] Mar 19 19:11:58 old-k8s-version-908523 kubelet[662]: E0319 19:11:58.090729 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.503833 521487 out.go:270] Mar 19 19:12:02 old-k8s-version-908523 kubelet[662]: E0319 19:12:02.093089 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0319 19:12:03.503840 521487 out.go:358] Setting ErrFile to fd 2...
I0319 19:12:03.503858 521487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 19:12:13.505192 521487 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0319 19:12:13.516184 521487 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0319 19:12:13.517722 521487 out.go:201]
W0319 19:12:13.519090 521487 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0319 19:12:13.519125 521487 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0319 19:12:13.519144 521487 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0319 19:12:13.519150 521487 out.go:270] *
W0319 19:12:13.520057 521487 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0319 19:12:13.521041 521487 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-908523 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-908523
helpers_test.go:235: (dbg) docker inspect old-k8s-version-908523:
-- stdout --
[
{
"Id": "0c371570b2a7ec714443dbc0b22a19ed384deebfb5d9b127b960bf75458e0da9",
"Created": "2025-03-19T19:03:03.925958092Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 521613,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-03-19T19:05:55.202593526Z",
"FinishedAt": "2025-03-19T19:05:54.348726517Z"
},
"Image": "sha256:df0c2544fb3106b890f0a9ab81fcf49f97edb092b83e47f42288ad5dfe1f4b40",
"ResolvConfPath": "/var/lib/docker/containers/0c371570b2a7ec714443dbc0b22a19ed384deebfb5d9b127b960bf75458e0da9/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0c371570b2a7ec714443dbc0b22a19ed384deebfb5d9b127b960bf75458e0da9/hostname",
"HostsPath": "/var/lib/docker/containers/0c371570b2a7ec714443dbc0b22a19ed384deebfb5d9b127b960bf75458e0da9/hosts",
"LogPath": "/var/lib/docker/containers/0c371570b2a7ec714443dbc0b22a19ed384deebfb5d9b127b960bf75458e0da9/0c371570b2a7ec714443dbc0b22a19ed384deebfb5d9b127b960bf75458e0da9-json.log",
"Name": "/old-k8s-version-908523",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-908523:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-908523",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "0c371570b2a7ec714443dbc0b22a19ed384deebfb5d9b127b960bf75458e0da9",
"LowerDir": "/var/lib/docker/overlay2/59c7db465d64bf48e40867fd884024d31ae0a2ba9a626778e3e0e74579eb53e7-init/diff:/var/lib/docker/overlay2/a87a6448235239b9d81e5ae7fe7c0657a7af9a304403ad425b2961de2bc3013f/diff",
"MergedDir": "/var/lib/docker/overlay2/59c7db465d64bf48e40867fd884024d31ae0a2ba9a626778e3e0e74579eb53e7/merged",
"UpperDir": "/var/lib/docker/overlay2/59c7db465d64bf48e40867fd884024d31ae0a2ba9a626778e3e0e74579eb53e7/diff",
"WorkDir": "/var/lib/docker/overlay2/59c7db465d64bf48e40867fd884024d31ae0a2ba9a626778e3e0e74579eb53e7/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-908523",
"Source": "/var/lib/docker/volumes/old-k8s-version-908523/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-908523",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-908523",
"name.minikube.sigs.k8s.io": "old-k8s-version-908523",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "b701acb2d872166893652f7121c3c35707020d17df8ca7019005759b516a43b7",
"SandboxKey": "/var/run/docker/netns/b701acb2d872",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33438"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33439"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33442"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33440"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33441"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-908523": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "e2:58:1f:f9:3d:2d",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "173ad91db272a5a74c79a03a9cabe655c396e33c057b88c49fb0c5e318f654e0",
"EndpointID": "46a60f6b0e4c83b74fb88ad447661a54291046b4ca53879bf332c11d8a48bc4f",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-908523",
"0c371570b2a7"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-908523 -n old-k8s-version-908523
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-908523 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-908523 logs -n 25: (1.944418233s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| ssh | cert-options-528375 ssh | cert-options-528375 | jenkins | v1.35.0 | 19 Mar 25 19:02 UTC | 19 Mar 25 19:02 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-528375 -- sudo | cert-options-528375 | jenkins | v1.35.0 | 19 Mar 25 19:02 UTC | 19 Mar 25 19:02 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-528375 | cert-options-528375 | jenkins | v1.35.0 | 19 Mar 25 19:02 UTC | 19 Mar 25 19:02 UTC |
| start | -p old-k8s-version-908523 | old-k8s-version-908523 | jenkins | v1.35.0 | 19 Mar 25 19:02 UTC | 19 Mar 25 19:05 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-335750 | cert-expiration-335750 | jenkins | v1.35.0 | 19 Mar 25 19:03 UTC | 19 Mar 25 19:04 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-335750 | cert-expiration-335750 | jenkins | v1.35.0 | 19 Mar 25 19:04 UTC | 19 Mar 25 19:04 UTC |
| start | -p no-preload-441624 | no-preload-441624 | jenkins | v1.35.0 | 19 Mar 25 19:04 UTC | 19 Mar 25 19:05 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p no-preload-441624 | no-preload-441624 | jenkins | v1.35.0 | 19 Mar 25 19:05 UTC | 19 Mar 25 19:05 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-441624 | no-preload-441624 | jenkins | v1.35.0 | 19 Mar 25 19:05 UTC | 19 Mar 25 19:05 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-441624 | no-preload-441624 | jenkins | v1.35.0 | 19 Mar 25 19:05 UTC | 19 Mar 25 19:05 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-441624 | no-preload-441624 | jenkins | v1.35.0 | 19 Mar 25 19:05 UTC | 19 Mar 25 19:09 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p old-k8s-version-908523 | old-k8s-version-908523 | jenkins | v1.35.0 | 19 Mar 25 19:05 UTC | 19 Mar 25 19:05 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-908523 | old-k8s-version-908523 | jenkins | v1.35.0 | 19 Mar 25 19:05 UTC | 19 Mar 25 19:05 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-908523 | old-k8s-version-908523 | jenkins | v1.35.0 | 19 Mar 25 19:05 UTC | 19 Mar 25 19:05 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-908523 | old-k8s-version-908523 | jenkins | v1.35.0 | 19 Mar 25 19:05 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| image | no-preload-441624 image list | no-preload-441624 | jenkins | v1.35.0 | 19 Mar 25 19:10 UTC | 19 Mar 25 19:10 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-441624 | no-preload-441624 | jenkins | v1.35.0 | 19 Mar 25 19:10 UTC | 19 Mar 25 19:10 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-441624 | no-preload-441624 | jenkins | v1.35.0 | 19 Mar 25 19:10 UTC | 19 Mar 25 19:10 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-441624 | no-preload-441624 | jenkins | v1.35.0 | 19 Mar 25 19:10 UTC | 19 Mar 25 19:10 UTC |
| delete | -p no-preload-441624 | no-preload-441624 | jenkins | v1.35.0 | 19 Mar 25 19:10 UTC | 19 Mar 25 19:10 UTC |
| start | -p embed-certs-728826 | embed-certs-728826 | jenkins | v1.35.0 | 19 Mar 25 19:10 UTC | 19 Mar 25 19:11 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p embed-certs-728826 | embed-certs-728826 | jenkins | v1.35.0 | 19 Mar 25 19:11 UTC | 19 Mar 25 19:11 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p embed-certs-728826 | embed-certs-728826 | jenkins | v1.35.0 | 19 Mar 25 19:11 UTC | 19 Mar 25 19:11 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p embed-certs-728826 | embed-certs-728826 | jenkins | v1.35.0 | 19 Mar 25 19:11 UTC | 19 Mar 25 19:11 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p embed-certs-728826 | embed-certs-728826 | jenkins | v1.35.0 | 19 Mar 25 19:11 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/03/19 19:11:31
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.24.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0319 19:11:31.155065 531208 out.go:345] Setting OutFile to fd 1 ...
I0319 19:11:31.155305 531208 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 19:11:31.155339 531208 out.go:358] Setting ErrFile to fd 2...
I0319 19:11:31.155360 531208 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 19:11:31.155660 531208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-300569/.minikube/bin
I0319 19:11:31.156091 531208 out.go:352] Setting JSON to false
I0319 19:11:31.157164 531208 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10425,"bootTime":1742401066,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I0319 19:11:31.157267 531208 start.go:139] virtualization:
I0319 19:11:31.158939 531208 out.go:177] * [embed-certs-728826] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0319 19:11:31.160291 531208 out.go:177] - MINIKUBE_LOCATION=20544
I0319 19:11:31.160415 531208 notify.go:220] Checking for updates...
I0319 19:11:31.163108 531208 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0319 19:11:31.164897 531208 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20544-300569/kubeconfig
I0319 19:11:31.166217 531208 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-300569/.minikube
I0319 19:11:31.167656 531208 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0319 19:11:31.169000 531208 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0319 19:11:31.170781 531208 config.go:182] Loaded profile config "embed-certs-728826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0319 19:11:31.171332 531208 driver.go:394] Setting default libvirt URI to qemu:///system
I0319 19:11:31.206266 531208 docker.go:123] docker version: linux-28.0.2:Docker Engine - Community
I0319 19:11:31.206393 531208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0319 19:11:31.299044  531208 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-03-19 19:11:31.289660932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0319 19:11:31.299157 531208 docker.go:318] overlay module found
I0319 19:11:31.300775 531208 out.go:177] * Using the docker driver based on existing profile
I0319 19:11:31.301842 531208 start.go:297] selected driver: docker
I0319 19:11:31.301870  531208 start.go:901] validating driver "docker" against &{Name:embed-certs-728826 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-728826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0319 19:11:31.301962 531208 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0319 19:11:31.302669 531208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0319 19:11:31.387176  531208 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-03-19 19:11:31.375387441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0319 19:11:31.387633 531208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0319 19:11:31.387686 531208 cni.go:84] Creating CNI manager for ""
I0319 19:11:31.387754 531208 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0319 19:11:31.387831 531208 start.go:340] cluster config:
{Name:embed-certs-728826 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-728826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0319 19:11:31.389662 531208 out.go:177] * Starting "embed-certs-728826" primary control-plane node in "embed-certs-728826" cluster
I0319 19:11:31.391191 531208 cache.go:121] Beginning downloading kic base image for docker with containerd
I0319 19:11:31.392953 531208 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
I0319 19:11:31.394700 531208 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0319 19:11:31.394761 531208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20544-300569/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4
I0319 19:11:31.394773 531208 cache.go:56] Caching tarball of preloaded images
I0319 19:11:31.394784 531208 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
I0319 19:11:31.394857 531208 preload.go:172] Found /home/jenkins/minikube-integration/20544-300569/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0319 19:11:31.394867 531208 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0319 19:11:31.394986 531208 profile.go:143] Saving config to /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/embed-certs-728826/config.json ...
I0319 19:11:31.414111 531208 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
I0319 19:11:31.414137 531208 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
I0319 19:11:31.414156 531208 cache.go:230] Successfully downloaded all kic artifacts
I0319 19:11:31.414179 531208 start.go:360] acquireMachinesLock for embed-certs-728826: {Name:mk16817ba76488daa486d7e6042cb5912c221be7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0319 19:11:31.414249 531208 start.go:364] duration metric: took 47.435µs to acquireMachinesLock for "embed-certs-728826"
I0319 19:11:31.414280 531208 start.go:96] Skipping create...Using existing machine configuration
I0319 19:11:31.414290 531208 fix.go:54] fixHost starting:
I0319 19:11:31.414551 531208 cli_runner.go:164] Run: docker container inspect embed-certs-728826 --format={{.State.Status}}
I0319 19:11:31.431344 531208 fix.go:112] recreateIfNeeded on embed-certs-728826: state=Stopped err=<nil>
W0319 19:11:31.431376 531208 fix.go:138] unexpected machine state, will restart: <nil>
I0319 19:11:31.432915 531208 out.go:177] * Restarting existing docker container for "embed-certs-728826" ...
I0319 19:11:30.305144 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:32.801244 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:34.804777 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:31.434250 531208 cli_runner.go:164] Run: docker start embed-certs-728826
I0319 19:11:31.661263 531208 cli_runner.go:164] Run: docker container inspect embed-certs-728826 --format={{.State.Status}}
I0319 19:11:31.683561 531208 kic.go:430] container "embed-certs-728826" state is running.
I0319 19:11:31.686538 531208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-728826
I0319 19:11:31.710261 531208 profile.go:143] Saving config to /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/embed-certs-728826/config.json ...
I0319 19:11:31.710477 531208 machine.go:93] provisionDockerMachine start ...
I0319 19:11:31.710534 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:31.733545 531208 main.go:141] libmachine: Using SSH client type: native
I0319 19:11:31.734056 531208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33448 <nil> <nil>}
I0319 19:11:31.734074 531208 main.go:141] libmachine: About to run SSH command:
hostname
I0319 19:11:31.734607 531208 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49716->127.0.0.1:33448: read: connection reset by peer
I0319 19:11:34.864264 531208 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-728826
I0319 19:11:34.864291 531208 ubuntu.go:169] provisioning hostname "embed-certs-728826"
I0319 19:11:34.864368 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:34.887350 531208 main.go:141] libmachine: Using SSH client type: native
I0319 19:11:34.887675 531208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33448 <nil> <nil>}
I0319 19:11:34.887695 531208 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-728826 && echo "embed-certs-728826" | sudo tee /etc/hostname
I0319 19:11:35.025504 531208 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-728826
I0319 19:11:35.025583 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:35.044746 531208 main.go:141] libmachine: Using SSH client type: native
I0319 19:11:35.045076 531208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33448 <nil> <nil>}
I0319 19:11:35.045098 531208 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-728826' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-728826/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-728826' | sudo tee -a /etc/hosts;
fi
fi
I0319 19:11:35.172920 531208 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0319 19:11:35.172948 531208 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20544-300569/.minikube CaCertPath:/home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20544-300569/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20544-300569/.minikube}
I0319 19:11:35.172979 531208 ubuntu.go:177] setting up certificates
I0319 19:11:35.172989 531208 provision.go:84] configureAuth start
I0319 19:11:35.173050 531208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-728826
I0319 19:11:35.192119 531208 provision.go:143] copyHostCerts
I0319 19:11:35.192184 531208 exec_runner.go:144] found /home/jenkins/minikube-integration/20544-300569/.minikube/cert.pem, removing ...
I0319 19:11:35.192201 531208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20544-300569/.minikube/cert.pem
I0319 19:11:35.192283 531208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20544-300569/.minikube/cert.pem (1123 bytes)
I0319 19:11:35.192420 531208 exec_runner.go:144] found /home/jenkins/minikube-integration/20544-300569/.minikube/key.pem, removing ...
I0319 19:11:35.192425 531208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20544-300569/.minikube/key.pem
I0319 19:11:35.192452 531208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20544-300569/.minikube/key.pem (1679 bytes)
I0319 19:11:35.192502 531208 exec_runner.go:144] found /home/jenkins/minikube-integration/20544-300569/.minikube/ca.pem, removing ...
I0319 19:11:35.192507 531208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20544-300569/.minikube/ca.pem
I0319 19:11:35.192528 531208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20544-300569/.minikube/ca.pem (1078 bytes)
I0319 19:11:35.192620 531208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20544-300569/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca-key.pem org=jenkins.embed-certs-728826 san=[127.0.0.1 192.168.76.2 embed-certs-728826 localhost minikube]
I0319 19:11:35.245744 531208 provision.go:177] copyRemoteCerts
I0319 19:11:35.245822 531208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0319 19:11:35.245864 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:35.266032 531208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/embed-certs-728826/id_rsa Username:docker}
I0319 19:11:35.362408 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0319 19:11:35.389087 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0319 19:11:35.423910 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0319 19:11:35.448984 531208 provision.go:87] duration metric: took 275.980165ms to configureAuth
I0319 19:11:35.449014 531208 ubuntu.go:193] setting minikube options for container-runtime
I0319 19:11:35.449223 531208 config.go:182] Loaded profile config "embed-certs-728826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0319 19:11:35.449239 531208 machine.go:96] duration metric: took 3.738754375s to provisionDockerMachine
I0319 19:11:35.449249 531208 start.go:293] postStartSetup for "embed-certs-728826" (driver="docker")
I0319 19:11:35.449259 531208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0319 19:11:35.449319 531208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0319 19:11:35.449369 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:35.467775 531208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/embed-certs-728826/id_rsa Username:docker}
I0319 19:11:35.557877 531208 ssh_runner.go:195] Run: cat /etc/os-release
I0319 19:11:35.561202 531208 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0319 19:11:35.561240 531208 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0319 19:11:35.561251 531208 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0319 19:11:35.561258 531208 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0319 19:11:35.561268 531208 filesync.go:126] Scanning /home/jenkins/minikube-integration/20544-300569/.minikube/addons for local assets ...
I0319 19:11:35.561328 531208 filesync.go:126] Scanning /home/jenkins/minikube-integration/20544-300569/.minikube/files for local assets ...
I0319 19:11:35.561415 531208 filesync.go:149] local asset: /home/jenkins/minikube-integration/20544-300569/.minikube/files/etc/ssl/certs/3060932.pem -> 3060932.pem in /etc/ssl/certs
I0319 19:11:35.561521 531208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0319 19:11:35.570352 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/files/etc/ssl/certs/3060932.pem --> /etc/ssl/certs/3060932.pem (1708 bytes)
I0319 19:11:35.596072 531208 start.go:296] duration metric: took 146.806216ms for postStartSetup
I0319 19:11:35.596226 531208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0319 19:11:35.596291 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:35.614869 531208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/embed-certs-728826/id_rsa Username:docker}
I0319 19:11:35.702094 531208 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0319 19:11:35.706908 531208 fix.go:56] duration metric: took 4.292609375s for fixHost
I0319 19:11:35.706946 531208 start.go:83] releasing machines lock for "embed-certs-728826", held for 4.292684585s
I0319 19:11:35.707034 531208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-728826
I0319 19:11:35.724316 531208 ssh_runner.go:195] Run: cat /version.json
I0319 19:11:35.724379 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:35.724652 531208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0319 19:11:35.724715 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:35.746133 531208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/embed-certs-728826/id_rsa Username:docker}
I0319 19:11:35.764660 531208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/embed-certs-728826/id_rsa Username:docker}
I0319 19:11:35.836099 531208 ssh_runner.go:195] Run: systemctl --version
I0319 19:11:35.974521 531208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0319 19:11:35.979230 531208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0319 19:11:35.998636 531208 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0319 19:11:35.998741 531208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0319 19:11:36.008633 531208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0319 19:11:36.008659 531208 start.go:495] detecting cgroup driver to use...
I0319 19:11:36.008693 531208 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0319 19:11:36.008751 531208 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0319 19:11:36.023933 531208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0319 19:11:36.039652 531208 docker.go:217] disabling cri-docker service (if available) ...
I0319 19:11:36.039724 531208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0319 19:11:36.055221 531208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0319 19:11:36.067677 531208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0319 19:11:36.154084 531208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0319 19:11:36.235192 531208 docker.go:233] disabling docker service ...
I0319 19:11:36.235283 531208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0319 19:11:36.248411 531208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0319 19:11:36.260067 531208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0319 19:11:36.349353 531208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0319 19:11:36.443302 531208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0319 19:11:36.455411 531208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0319 19:11:36.472109 531208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0319 19:11:36.482189 531208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0319 19:11:36.491823 531208 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0319 19:11:36.491952 531208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0319 19:11:36.502400 531208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0319 19:11:36.512714 531208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0319 19:11:36.524240 531208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0319 19:11:36.534518 531208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0319 19:11:36.543905 531208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0319 19:11:36.553918 531208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0319 19:11:36.563759 531208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0319 19:11:36.574569 531208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0319 19:11:36.583752 531208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0319 19:11:36.592281 531208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0319 19:11:36.679305 531208 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0319 19:11:36.832237 531208 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0319 19:11:36.832339 531208 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0319 19:11:36.840998 531208 start.go:563] Will wait 60s for crictl version
I0319 19:11:36.841098 531208 ssh_runner.go:195] Run: which crictl
I0319 19:11:36.844905 531208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0319 19:11:36.890559 531208 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.25
RuntimeApiVersion: v1
I0319 19:11:36.890652 531208 ssh_runner.go:195] Run: containerd --version
I0319 19:11:36.913262 531208 ssh_runner.go:195] Run: containerd --version
I0319 19:11:36.936965 531208 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
I0319 19:11:36.938215 531208 cli_runner.go:164] Run: docker network inspect embed-certs-728826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0319 19:11:36.954106 531208 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0319 19:11:36.957708 531208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0319 19:11:36.968528 531208 kubeadm.go:883] updating cluster {Name:embed-certs-728826 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-728826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0319 19:11:36.968695 531208 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0319 19:11:36.968757 531208 ssh_runner.go:195] Run: sudo crictl images --output json
I0319 19:11:37.015252 531208 containerd.go:627] all images are preloaded for containerd runtime.
I0319 19:11:37.015279 531208 containerd.go:534] Images already preloaded, skipping extraction
I0319 19:11:37.015362 531208 ssh_runner.go:195] Run: sudo crictl images --output json
I0319 19:11:37.056393 531208 containerd.go:627] all images are preloaded for containerd runtime.
I0319 19:11:37.056419 531208 cache_images.go:84] Images are preloaded, skipping loading
I0319 19:11:37.056428 531208 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 containerd true true} ...
I0319 19:11:37.056534 531208 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-728826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:embed-certs-728826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0319 19:11:37.056616 531208 ssh_runner.go:195] Run: sudo crictl info
I0319 19:11:37.097859 531208 cni.go:84] Creating CNI manager for ""
I0319 19:11:37.097887 531208 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0319 19:11:37.097905 531208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0319 19:11:37.097933 531208 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-728826 NodeName:embed-certs-728826 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0319 19:11:37.098048 531208 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-728826"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0319 19:11:37.098129 531208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0319 19:11:37.107952 531208 binaries.go:44] Found k8s binaries, skipping transfer
I0319 19:11:37.108050 531208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0319 19:11:37.117272 531208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0319 19:11:37.136217 531208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0319 19:11:37.155164 531208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0319 19:11:37.174458 531208 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0319 19:11:37.177933 531208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0319 19:11:37.189318 531208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0319 19:11:37.279757 531208 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0319 19:11:37.301555 531208 certs.go:68] Setting up /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/embed-certs-728826 for IP: 192.168.76.2
I0319 19:11:37.301579 531208 certs.go:194] generating shared ca certs ...
I0319 19:11:37.301611 531208 certs.go:226] acquiring lock for ca certs: {Name:mka72ef37d967cad7bd9325c6ba9f8fdcb24c066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0319 19:11:37.301792 531208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20544-300569/.minikube/ca.key
I0319 19:11:37.301861 531208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20544-300569/.minikube/proxy-client-ca.key
I0319 19:11:37.301877 531208 certs.go:256] generating profile certs ...
I0319 19:11:37.301986 531208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/embed-certs-728826/client.key
I0319 19:11:37.302076 531208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/embed-certs-728826/apiserver.key.1f2a7616
I0319 19:11:37.302146 531208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/embed-certs-728826/proxy-client.key
I0319 19:11:37.302285 531208 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/306093.pem (1338 bytes)
W0319 19:11:37.302340 531208 certs.go:480] ignoring /home/jenkins/minikube-integration/20544-300569/.minikube/certs/306093_empty.pem, impossibly tiny 0 bytes
I0319 19:11:37.302355 531208 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca-key.pem (1675 bytes)
I0319 19:11:37.302383 531208 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/ca.pem (1078 bytes)
I0319 19:11:37.302435 531208 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/cert.pem (1123 bytes)
I0319 19:11:37.302465 531208 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/certs/key.pem (1679 bytes)
I0319 19:11:37.302530 531208 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-300569/.minikube/files/etc/ssl/certs/3060932.pem (1708 bytes)
I0319 19:11:37.303248 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0319 19:11:37.338143 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0319 19:11:37.366036 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0319 19:11:37.395750 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0319 19:11:37.428614 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/embed-certs-728826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0319 19:11:37.458986 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/embed-certs-728826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0319 19:11:37.489962 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/embed-certs-728826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0319 19:11:37.514815 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/profiles/embed-certs-728826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0319 19:11:37.540315 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/files/etc/ssl/certs/3060932.pem --> /usr/share/ca-certificates/3060932.pem (1708 bytes)
I0319 19:11:37.575884 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0319 19:11:37.603518 531208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-300569/.minikube/certs/306093.pem --> /usr/share/ca-certificates/306093.pem (1338 bytes)
I0319 19:11:37.630763 531208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0319 19:11:37.651499 531208 ssh_runner.go:195] Run: openssl version
I0319 19:11:37.657240 531208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3060932.pem && ln -fs /usr/share/ca-certificates/3060932.pem /etc/ssl/certs/3060932.pem"
I0319 19:11:37.666642 531208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3060932.pem
I0319 19:11:37.670180 531208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 18:26 /usr/share/ca-certificates/3060932.pem
I0319 19:11:37.670275 531208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3060932.pem
I0319 19:11:37.677160 531208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3060932.pem /etc/ssl/certs/3ec20f2e.0"
I0319 19:11:37.686117 531208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0319 19:11:37.695658 531208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0319 19:11:37.699375 531208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 18:18 /usr/share/ca-certificates/minikubeCA.pem
I0319 19:11:37.699439 531208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0319 19:11:37.706587 531208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0319 19:11:37.715970 531208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/306093.pem && ln -fs /usr/share/ca-certificates/306093.pem /etc/ssl/certs/306093.pem"
I0319 19:11:37.725989 531208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/306093.pem
I0319 19:11:37.729694 531208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 18:26 /usr/share/ca-certificates/306093.pem
I0319 19:11:37.729769 531208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/306093.pem
I0319 19:11:37.737650 531208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/306093.pem /etc/ssl/certs/51391683.0"
I0319 19:11:37.747102 531208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0319 19:11:37.750864 531208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0319 19:11:37.758530 531208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0319 19:11:37.766027 531208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0319 19:11:37.773213 531208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0319 19:11:37.780385 531208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0319 19:11:37.787506 531208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0319 19:11:37.794646 531208 kubeadm.go:392] StartCluster: {Name:embed-certs-728826 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-728826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0319 19:11:37.794780 531208 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0319 19:11:37.794869 531208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0319 19:11:37.834211 531208 cri.go:89] found id: "2e4a554d5ab2b46e910cebb68d8a02ea2dbad90f7100498c83f53e99cdbd9273"
I0319 19:11:37.834309 531208 cri.go:89] found id: "56b561e01c4bfc0994dfdd3276df0fd624a37fa2cc9bfb6bc5c5bfe8283fe28f"
I0319 19:11:37.834330 531208 cri.go:89] found id: "83d452d59943715bfc3625b8ef68ea6e32f87a7d8fdba741ad5959c21ae2b74b"
I0319 19:11:37.834370 531208 cri.go:89] found id: "78830b7fe39756a49f8811724ec1619ce21ce4c5f99e6836c68d7f50dff223d5"
I0319 19:11:37.834401 531208 cri.go:89] found id: "1d2e63992892f26658c2c61bafb8cf481bbcee61d16dac435b0aebc7c873f919"
I0319 19:11:37.834429 531208 cri.go:89] found id: "d6c31c3c8b7a74cd8e4dc51db108d2509cf7480577082fb9ba4b505894e9b897"
I0319 19:11:37.834468 531208 cri.go:89] found id: "566db25a6259d86a623cba967dc9b563b6f5ead9b444ccaed6856904b5500f47"
I0319 19:11:37.834506 531208 cri.go:89] found id: "e7edf17e1e1fd52668621e3e2dc29c6628ac01cf3a7aa1bb1d3811c8b8e71868"
I0319 19:11:37.834545 531208 cri.go:89] found id: ""
I0319 19:11:37.834640 531208 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0319 19:11:37.850650 531208 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-03-19T19:11:37Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0319 19:11:37.850778 531208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0319 19:11:37.863411 531208 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0319 19:11:37.863459 531208 kubeadm.go:593] restartPrimaryControlPlane start ...
I0319 19:11:37.863542 531208 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0319 19:11:37.890238 531208 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0319 19:11:37.890915 531208 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-728826" does not appear in /home/jenkins/minikube-integration/20544-300569/kubeconfig
I0319 19:11:37.891255 531208 kubeconfig.go:62] /home/jenkins/minikube-integration/20544-300569/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-728826" cluster setting kubeconfig missing "embed-certs-728826" context setting]
I0319 19:11:37.891774 531208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-300569/kubeconfig: {Name:mkacba6ab67fe1ca8a3d03569f0055410489e147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0319 19:11:37.893676 531208 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0319 19:11:37.905329 531208 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0319 19:11:37.905373 531208 kubeadm.go:597] duration metric: took 41.907268ms to restartPrimaryControlPlane
I0319 19:11:37.905389 531208 kubeadm.go:394] duration metric: took 110.764104ms to StartCluster
I0319 19:11:37.905422 531208 settings.go:142] acquiring lock: {Name:mk92e2d35bdbbf8cdf17aa5c8f2d12a5eb6dbf61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0319 19:11:37.905535 531208 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20544-300569/kubeconfig
I0319 19:11:37.906980 531208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-300569/kubeconfig: {Name:mkacba6ab67fe1ca8a3d03569f0055410489e147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0319 19:11:37.908351 531208 config.go:182] Loaded profile config "embed-certs-728826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0319 19:11:37.908405 531208 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0319 19:11:37.908466 531208 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0319 19:11:37.908539 531208 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-728826"
I0319 19:11:37.908582 531208 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-728826"
W0319 19:11:37.908592 531208 addons.go:247] addon storage-provisioner should already be in state true
I0319 19:11:37.908615 531208 host.go:66] Checking if "embed-certs-728826" exists ...
I0319 19:11:37.909097 531208 cli_runner.go:164] Run: docker container inspect embed-certs-728826 --format={{.State.Status}}
I0319 19:11:37.909583 531208 addons.go:69] Setting dashboard=true in profile "embed-certs-728826"
I0319 19:11:37.909646 531208 addons.go:238] Setting addon dashboard=true in "embed-certs-728826"
W0319 19:11:37.909663 531208 addons.go:247] addon dashboard should already be in state true
I0319 19:11:37.909687 531208 host.go:66] Checking if "embed-certs-728826" exists ...
I0319 19:11:37.910194 531208 cli_runner.go:164] Run: docker container inspect embed-certs-728826 --format={{.State.Status}}
I0319 19:11:37.910377 531208 addons.go:69] Setting metrics-server=true in profile "embed-certs-728826"
I0319 19:11:37.910397 531208 addons.go:238] Setting addon metrics-server=true in "embed-certs-728826"
W0319 19:11:37.910404 531208 addons.go:247] addon metrics-server should already be in state true
I0319 19:11:37.910479 531208 host.go:66] Checking if "embed-certs-728826" exists ...
I0319 19:11:37.910977 531208 cli_runner.go:164] Run: docker container inspect embed-certs-728826 --format={{.State.Status}}
I0319 19:11:37.911376 531208 addons.go:69] Setting default-storageclass=true in profile "embed-certs-728826"
I0319 19:11:37.911397 531208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-728826"
I0319 19:11:37.911678 531208 cli_runner.go:164] Run: docker container inspect embed-certs-728826 --format={{.State.Status}}
I0319 19:11:37.914445 531208 out.go:177] * Verifying Kubernetes components...
I0319 19:11:37.917772 531208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0319 19:11:37.953855 531208 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0319 19:11:37.960319 531208 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0319 19:11:37.964410 531208 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0319 19:11:37.964434 531208 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0319 19:11:37.964506 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:37.993774 531208 addons.go:238] Setting addon default-storageclass=true in "embed-certs-728826"
W0319 19:11:37.993797 531208 addons.go:247] addon default-storageclass should already be in state true
I0319 19:11:37.993823 531208 host.go:66] Checking if "embed-certs-728826" exists ...
I0319 19:11:38.001196 531208 cli_runner.go:164] Run: docker container inspect embed-certs-728826 --format={{.State.Status}}
I0319 19:11:38.004789 531208 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0319 19:11:38.008518 531208 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0319 19:11:38.008540 531208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0319 19:11:38.008635 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:38.019041 531208 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0319 19:11:37.301234 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:39.301343 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:38.019309 531208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/embed-certs-728826/id_rsa Username:docker}
I0319 19:11:38.023733 531208 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0319 19:11:38.023755 531208 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0319 19:11:38.023823 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:38.048325 531208 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0319 19:11:38.048363 531208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0319 19:11:38.048428 531208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728826
I0319 19:11:38.076758 531208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/embed-certs-728826/id_rsa Username:docker}
I0319 19:11:38.089499 531208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/embed-certs-728826/id_rsa Username:docker}
I0319 19:11:38.101517 531208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/20544-300569/.minikube/machines/embed-certs-728826/id_rsa Username:docker}
I0319 19:11:38.137756 531208 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0319 19:11:38.210610 531208 node_ready.go:35] waiting up to 6m0s for node "embed-certs-728826" to be "Ready" ...
I0319 19:11:38.303200 531208 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0319 19:11:38.303220 531208 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0319 19:11:38.358358 531208 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0319 19:11:38.358381 531208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0319 19:11:38.382544 531208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0319 19:11:38.389416 531208 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0319 19:11:38.389437 531208 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0319 19:11:38.436527 531208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0319 19:11:38.520302 531208 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0319 19:11:38.520329 531208 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0319 19:11:38.530972 531208 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0319 19:11:38.530999 531208 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0319 19:11:38.629740 531208 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0319 19:11:38.629768 531208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
W0319 19:11:38.722636 531208 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0319 19:11:38.722725 531208 retry.go:31] will retry after 285.501527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0319 19:11:38.779387 531208 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0319 19:11:38.779471 531208 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0319 19:11:38.922086 531208 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0319 19:11:38.922158 531208 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W0319 19:11:38.926789 531208 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0319 19:11:38.926867 531208 retry.go:31] will retry after 347.300581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0319 19:11:38.965560 531208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0319 19:11:38.995502 531208 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0319 19:11:38.995581 531208 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0319 19:11:39.009275 531208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0319 19:11:39.132479 531208 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0319 19:11:39.132570 531208 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0319 19:11:39.275217 531208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0319 19:11:39.279129 531208 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0319 19:11:39.279200 531208 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0319 19:11:39.373576 531208 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0319 19:11:39.373651 531208 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0319 19:11:39.456983 531208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0319 19:11:41.800853 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:44.302273 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:42.953070 531208 node_ready.go:49] node "embed-certs-728826" has status "Ready":"True"
I0319 19:11:42.953100 531208 node_ready.go:38] duration metric: took 4.742450029s for node "embed-certs-728826" to be "Ready" ...
I0319 19:11:42.953110 531208 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0319 19:11:42.967752 531208 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zp82z" in "kube-system" namespace to be "Ready" ...
I0319 19:11:42.983899 531208 pod_ready.go:93] pod "coredns-668d6bf9bc-zp82z" in "kube-system" namespace has status "Ready":"True"
I0319 19:11:42.983924 531208 pod_ready.go:82] duration metric: took 16.138099ms for pod "coredns-668d6bf9bc-zp82z" in "kube-system" namespace to be "Ready" ...
I0319 19:11:42.983942 531208 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-728826" in "kube-system" namespace to be "Ready" ...
I0319 19:11:42.989360 531208 pod_ready.go:93] pod "etcd-embed-certs-728826" in "kube-system" namespace has status "Ready":"True"
I0319 19:11:42.989386 531208 pod_ready.go:82] duration metric: took 5.435072ms for pod "etcd-embed-certs-728826" in "kube-system" namespace to be "Ready" ...
I0319 19:11:42.989401 531208 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-728826" in "kube-system" namespace to be "Ready" ...
I0319 19:11:43.004014 531208 pod_ready.go:93] pod "kube-apiserver-embed-certs-728826" in "kube-system" namespace has status "Ready":"True"
I0319 19:11:43.004040 531208 pod_ready.go:82] duration metric: took 14.630943ms for pod "kube-apiserver-embed-certs-728826" in "kube-system" namespace to be "Ready" ...
I0319 19:11:43.004054 531208 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-728826" in "kube-system" namespace to be "Ready" ...
I0319 19:11:43.012411 531208 pod_ready.go:93] pod "kube-controller-manager-embed-certs-728826" in "kube-system" namespace has status "Ready":"True"
I0319 19:11:43.012436 531208 pod_ready.go:82] duration metric: took 8.374872ms for pod "kube-controller-manager-embed-certs-728826" in "kube-system" namespace to be "Ready" ...
I0319 19:11:43.012449 531208 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v7mfj" in "kube-system" namespace to be "Ready" ...
I0319 19:11:43.156229 531208 pod_ready.go:93] pod "kube-proxy-v7mfj" in "kube-system" namespace has status "Ready":"True"
I0319 19:11:43.156258 531208 pod_ready.go:82] duration metric: took 143.801762ms for pod "kube-proxy-v7mfj" in "kube-system" namespace to be "Ready" ...
I0319 19:11:43.156269 531208 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-728826" in "kube-system" namespace to be "Ready" ...
I0319 19:11:43.555218 531208 pod_ready.go:93] pod "kube-scheduler-embed-certs-728826" in "kube-system" namespace has status "Ready":"True"
I0319 19:11:43.555244 531208 pod_ready.go:82] duration metric: took 398.96624ms for pod "kube-scheduler-embed-certs-728826" in "kube-system" namespace to be "Ready" ...
I0319 19:11:43.555256 531208 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace to be "Ready" ...
I0319 19:11:45.451582 531208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.485934606s)
I0319 19:11:45.451837 531208 addons.go:479] Verifying addon metrics-server=true in "embed-certs-728826"
I0319 19:11:45.451744 531208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.442384476s)
I0319 19:11:45.451777 531208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.176486132s)
I0319 19:11:45.565350 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:45.589535 531208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.13245097s)
I0319 19:11:45.592107 531208 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-728826 addons enable metrics-server
I0319 19:11:45.594201 531208 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
I0319 19:11:45.597227 531208 addons.go:514] duration metric: took 7.688760291s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
I0319 19:11:46.801371 521487 pod_ready.go:103] pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:49.296298 521487 pod_ready.go:82] duration metric: took 4m0.00092604s for pod "metrics-server-9975d5f86-rls8x" in "kube-system" namespace to be "Ready" ...
E0319 19:11:49.296327 521487 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0319 19:11:49.296338 521487 pod_ready.go:39] duration metric: took 5m30.282329345s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0319 19:11:49.296356 521487 api_server.go:52] waiting for apiserver process to appear ...
I0319 19:11:49.296404 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0319 19:11:49.296474 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0319 19:11:49.359231 521487 cri.go:89] found id: "4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b"
I0319 19:11:49.359249 521487 cri.go:89] found id: "b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0"
I0319 19:11:49.359254 521487 cri.go:89] found id: ""
I0319 19:11:49.359262 521487 logs.go:282] 2 containers: [4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0]
I0319 19:11:49.359322 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.363487 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.367117 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0319 19:11:49.367188 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0319 19:11:49.432706 521487 cri.go:89] found id: "9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432"
I0319 19:11:49.432726 521487 cri.go:89] found id: "590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe"
I0319 19:11:49.432730 521487 cri.go:89] found id: ""
I0319 19:11:49.432738 521487 logs.go:282] 2 containers: [9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432 590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe]
I0319 19:11:49.432795 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.436691 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.441989 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0319 19:11:49.442059 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0319 19:11:49.512212 521487 cri.go:89] found id: "bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9"
I0319 19:11:49.512229 521487 cri.go:89] found id: "45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a"
I0319 19:11:49.512234 521487 cri.go:89] found id: ""
I0319 19:11:49.512241 521487 logs.go:282] 2 containers: [bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9 45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a]
I0319 19:11:49.512297 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.516172 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.519834 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0319 19:11:49.519912 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0319 19:11:49.569434 521487 cri.go:89] found id: "c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0"
I0319 19:11:49.569507 521487 cri.go:89] found id: "49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4"
I0319 19:11:49.569530 521487 cri.go:89] found id: ""
I0319 19:11:49.569551 521487 logs.go:282] 2 containers: [c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0 49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4]
I0319 19:11:49.569649 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.573889 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.578263 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0319 19:11:49.578387 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0319 19:11:49.631283 521487 cri.go:89] found id: "5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587"
I0319 19:11:49.631344 521487 cri.go:89] found id: "3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c"
I0319 19:11:49.631372 521487 cri.go:89] found id: ""
I0319 19:11:49.631393 521487 logs.go:282] 2 containers: [5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587 3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c]
I0319 19:11:49.631479 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.635503 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.639502 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0319 19:11:49.639633 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0319 19:11:49.693635 521487 cri.go:89] found id: "06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2"
I0319 19:11:49.693705 521487 cri.go:89] found id: "df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc"
I0319 19:11:49.693739 521487 cri.go:89] found id: ""
I0319 19:11:49.693766 521487 logs.go:282] 2 containers: [06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2 df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc]
I0319 19:11:49.693854 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.698373 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.702396 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0319 19:11:49.702473 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0319 19:11:49.753286 521487 cri.go:89] found id: "c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934"
I0319 19:11:49.753310 521487 cri.go:89] found id: "ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657"
I0319 19:11:49.753316 521487 cri.go:89] found id: ""
I0319 19:11:49.753323 521487 logs.go:282] 2 containers: [c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934 ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657]
I0319 19:11:49.753381 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.757668 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.761347 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0319 19:11:49.761429 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0319 19:11:49.810357 521487 cri.go:89] found id: "8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc"
I0319 19:11:49.810379 521487 cri.go:89] found id: "f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec"
I0319 19:11:49.810384 521487 cri.go:89] found id: ""
I0319 19:11:49.810392 521487 logs.go:282] 2 containers: [8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec]
I0319 19:11:49.810449 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.814612 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.818351 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0319 19:11:49.818418 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0319 19:11:49.887489 521487 cri.go:89] found id: "12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8"
I0319 19:11:49.887513 521487 cri.go:89] found id: ""
I0319 19:11:49.887522 521487 logs.go:282] 1 containers: [12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8]
I0319 19:11:49.887590 521487 ssh_runner.go:195] Run: which crictl
I0319 19:11:49.891950 521487 logs.go:123] Gathering logs for kube-proxy [5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587] ...
I0319 19:11:49.891975 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587"
I0319 19:11:48.061818 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:50.561328 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:49.947849 521487 logs.go:123] Gathering logs for kube-proxy [3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c] ...
I0319 19:11:49.947880 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c"
I0319 19:11:50.004674 521487 logs.go:123] Gathering logs for kube-controller-manager [df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc] ...
I0319 19:11:50.004704 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc"
I0319 19:11:50.081391 521487 logs.go:123] Gathering logs for containerd ...
I0319 19:11:50.081431 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0319 19:11:50.165806 521487 logs.go:123] Gathering logs for kubelet ...
I0319 19:11:50.165855 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0319 19:11:50.254447 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.870626 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-gnznl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-gnznl" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.254700 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.872945 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.254952 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.873232 662 reflector.go:138] object-"kube-system"/"coredns-token-mptx4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mptx4" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.255208 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.880644 662 reflector.go:138] object-"kube-system"/"kindnet-token-w7f6q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-w7f6q" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.255472 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.881550 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-wx6lx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-wx6lx" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.255727 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.884701 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.260319 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:19 old-k8s-version-908523 kubelet[662]: E0319 19:06:19.002056 662 reflector.go:138] object-"default"/"default-token-2xksl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2xksl" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.260613 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:19 old-k8s-version-908523 kubelet[662]: E0319 19:06:19.003962 662 reflector.go:138] object-"kube-system"/"metrics-server-token-rqzd4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-rqzd4" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.269670 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:20 old-k8s-version-908523 kubelet[662]: E0319 19:06:20.811808 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:11:50.270149 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:21 old-k8s-version-908523 kubelet[662]: E0319 19:06:21.265120 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.274455 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:37 old-k8s-version-908523 kubelet[662]: E0319 19:06:37.104000 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:11:50.274903 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:37 old-k8s-version-908523 kubelet[662]: E0319 19:06:37.880129 662 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-hd7qz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-hd7qz" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:11:50.276958 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:49 old-k8s-version-908523 kubelet[662]: E0319 19:06:49.389186 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.277343 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:50 old-k8s-version-908523 kubelet[662]: E0319 19:06:50.396880 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.277901 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:51 old-k8s-version-908523 kubelet[662]: E0319 19:06:51.091415 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.278239 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:57 old-k8s-version-908523 kubelet[662]: E0319 19:06:57.352720 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.281268 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:02 old-k8s-version-908523 kubelet[662]: E0319 19:07:02.111155 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:11:50.281911 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:10 old-k8s-version-908523 kubelet[662]: E0319 19:07:10.454369 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.282098 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:17 old-k8s-version-908523 kubelet[662]: E0319 19:07:17.091490 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.282430 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:17 old-k8s-version-908523 kubelet[662]: E0319 19:07:17.352786 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.282616 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:28 old-k8s-version-908523 kubelet[662]: E0319 19:07:28.091504 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.282944 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:29 old-k8s-version-908523 kubelet[662]: E0319 19:07:29.090789 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.283534 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:41 old-k8s-version-908523 kubelet[662]: E0319 19:07:41.541724 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.283723 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:42 old-k8s-version-908523 kubelet[662]: E0319 19:07:42.092835 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.284083 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:47 old-k8s-version-908523 kubelet[662]: E0319 19:07:47.352704 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.286693 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:54 old-k8s-version-908523 kubelet[662]: E0319 19:07:54.103301 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:11:50.287026 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:01 old-k8s-version-908523 kubelet[662]: E0319 19:08:01.090920 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.287212 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:08 old-k8s-version-908523 kubelet[662]: E0319 19:08:08.094609 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.287542 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:15 old-k8s-version-908523 kubelet[662]: E0319 19:08:15.090816 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.287759 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:21 old-k8s-version-908523 kubelet[662]: E0319 19:08:21.091743 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.288365 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:28 old-k8s-version-908523 kubelet[662]: E0319 19:08:28.691752 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.288590 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:33 old-k8s-version-908523 kubelet[662]: E0319 19:08:33.091096 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.288949 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:37 old-k8s-version-908523 kubelet[662]: E0319 19:08:37.352894 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.289141 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:46 old-k8s-version-908523 kubelet[662]: E0319 19:08:46.091350 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.289470 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:49 old-k8s-version-908523 kubelet[662]: E0319 19:08:49.090767 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.289716 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:57 old-k8s-version-908523 kubelet[662]: E0319 19:08:57.091105 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.290048 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:03 old-k8s-version-908523 kubelet[662]: E0319 19:09:03.090799 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.290235 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:09 old-k8s-version-908523 kubelet[662]: E0319 19:09:09.091236 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.290564 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:14 old-k8s-version-908523 kubelet[662]: E0319 19:09:14.093204 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.293087 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:24 old-k8s-version-908523 kubelet[662]: E0319 19:09:24.099584 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:11:50.293455 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:29 old-k8s-version-908523 kubelet[662]: E0319 19:09:29.090926 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.293643 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:39 old-k8s-version-908523 kubelet[662]: E0319 19:09:39.091384 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.294012 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:42 old-k8s-version-908523 kubelet[662]: E0319 19:09:42.091266 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.294263 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:52 old-k8s-version-908523 kubelet[662]: E0319 19:09:52.095436 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.294857 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:55 old-k8s-version-908523 kubelet[662]: E0319 19:09:55.921112 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.295208 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:57 old-k8s-version-908523 kubelet[662]: E0319 19:09:57.353126 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.295397 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:07 old-k8s-version-908523 kubelet[662]: E0319 19:10:07.091358 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.295738 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:12 old-k8s-version-908523 kubelet[662]: E0319 19:10:12.098552 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.295927 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:21 old-k8s-version-908523 kubelet[662]: E0319 19:10:21.092107 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.296296 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:25 old-k8s-version-908523 kubelet[662]: E0319 19:10:25.090826 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.296492 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:32 old-k8s-version-908523 kubelet[662]: E0319 19:10:32.096004 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.296949 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:37 old-k8s-version-908523 kubelet[662]: E0319 19:10:37.090777 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.297138 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:43 old-k8s-version-908523 kubelet[662]: E0319 19:10:43.091133 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.297470 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:52 old-k8s-version-908523 kubelet[662]: E0319 19:10:52.095231 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.297656 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:54 old-k8s-version-908523 kubelet[662]: E0319 19:10:54.091203 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.297841 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:06 old-k8s-version-908523 kubelet[662]: E0319 19:11:06.092747 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.298170 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:07 old-k8s-version-908523 kubelet[662]: E0319 19:11:07.090893 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.298502 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:19 old-k8s-version-908523 kubelet[662]: E0319 19:11:19.091499 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.298714 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:21 old-k8s-version-908523 kubelet[662]: E0319 19:11:21.091450 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.299067 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:30 old-k8s-version-908523 kubelet[662]: E0319 19:11:30.091870 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.299256 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:35 old-k8s-version-908523 kubelet[662]: E0319 19:11:35.091408 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:50.299643 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:43 old-k8s-version-908523 kubelet[662]: E0319 19:11:43.091203 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:50.299861 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:49 old-k8s-version-908523 kubelet[662]: E0319 19:11:49.091197 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0319 19:11:50.299875 521487 logs.go:123] Gathering logs for describe nodes ...
I0319 19:11:50.299889 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0319 19:11:50.519556 521487 logs.go:123] Gathering logs for etcd [9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432] ...
I0319 19:11:50.519647 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432"
I0319 19:11:50.577633 521487 logs.go:123] Gathering logs for coredns [45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a] ...
I0319 19:11:50.577707 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a"
I0319 19:11:50.644683 521487 logs.go:123] Gathering logs for kube-scheduler [c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0] ...
I0319 19:11:50.644776 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0"
I0319 19:11:50.708784 521487 logs.go:123] Gathering logs for kube-scheduler [49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4] ...
I0319 19:11:50.708924 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4"
I0319 19:11:50.783758 521487 logs.go:123] Gathering logs for kubernetes-dashboard [12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8] ...
I0319 19:11:50.783841 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8"
I0319 19:11:50.840618 521487 logs.go:123] Gathering logs for kube-apiserver [b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0] ...
I0319 19:11:50.840704 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0"
I0319 19:11:50.936153 521487 logs.go:123] Gathering logs for kube-controller-manager [06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2] ...
I0319 19:11:50.936263 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2"
I0319 19:11:51.014675 521487 logs.go:123] Gathering logs for kindnet [ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657] ...
I0319 19:11:51.014818 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657"
I0319 19:11:51.075839 521487 logs.go:123] Gathering logs for storage-provisioner [8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc] ...
I0319 19:11:51.075938 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc"
I0319 19:11:51.144397 521487 logs.go:123] Gathering logs for container status ...
I0319 19:11:51.144503 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0319 19:11:51.224616 521487 logs.go:123] Gathering logs for dmesg ...
I0319 19:11:51.224695 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0319 19:11:51.246141 521487 logs.go:123] Gathering logs for kube-apiserver [4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b] ...
I0319 19:11:51.246220 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b"
I0319 19:11:51.343408 521487 logs.go:123] Gathering logs for etcd [590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe] ...
I0319 19:11:51.343492 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe"
I0319 19:11:51.414566 521487 logs.go:123] Gathering logs for coredns [bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9] ...
I0319 19:11:51.414644 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9"
I0319 19:11:51.466923 521487 logs.go:123] Gathering logs for kindnet [c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934] ...
I0319 19:11:51.466996 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934"
I0319 19:11:51.545942 521487 logs.go:123] Gathering logs for storage-provisioner [f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec] ...
I0319 19:11:51.546017 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec"
I0319 19:11:51.666420 521487 out.go:358] Setting ErrFile to fd 2...
I0319 19:11:51.666495 521487 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0319 19:11:51.666575 521487 out.go:270] X Problems detected in kubelet:
W0319 19:11:51.666625 521487 out.go:270] Mar 19 19:11:21 old-k8s-version-908523 kubelet[662]: E0319 19:11:21.091450 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:51.666676 521487 out.go:270] Mar 19 19:11:30 old-k8s-version-908523 kubelet[662]: E0319 19:11:30.091870 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:51.666732 521487 out.go:270] Mar 19 19:11:35 old-k8s-version-908523 kubelet[662]: E0319 19:11:35.091408 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:11:51.666769 521487 out.go:270] Mar 19 19:11:43 old-k8s-version-908523 kubelet[662]: E0319 19:11:43.091203 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:11:51.666818 521487 out.go:270] Mar 19 19:11:49 old-k8s-version-908523 kubelet[662]: E0319 19:11:49.091197 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0319 19:11:51.666848 521487 out.go:358] Setting ErrFile to fd 2...
I0319 19:11:51.666888 521487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 19:11:52.562971 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:55.061128 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:57.559626 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:11:59.560216 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:12:01.670806 521487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0319 19:12:01.688874 521487 api_server.go:72] duration metric: took 5m59.473606709s to wait for apiserver process to appear ...
I0319 19:12:01.688897 521487 api_server.go:88] waiting for apiserver healthz status ...
I0319 19:12:01.688933 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0319 19:12:01.688992 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0319 19:12:01.740452 521487 cri.go:89] found id: "4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b"
I0319 19:12:01.740472 521487 cri.go:89] found id: "b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0"
I0319 19:12:01.740477 521487 cri.go:89] found id: ""
I0319 19:12:01.740485 521487 logs.go:282] 2 containers: [4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0]
I0319 19:12:01.740638 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.744921 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.748804 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0319 19:12:01.748898 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0319 19:12:01.792172 521487 cri.go:89] found id: "9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432"
I0319 19:12:01.792197 521487 cri.go:89] found id: "590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe"
I0319 19:12:01.792202 521487 cri.go:89] found id: ""
I0319 19:12:01.792212 521487 logs.go:282] 2 containers: [9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432 590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe]
I0319 19:12:01.792275 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.796223 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.800123 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0319 19:12:01.800200 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0319 19:12:01.847507 521487 cri.go:89] found id: "bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9"
I0319 19:12:01.847533 521487 cri.go:89] found id: "45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a"
I0319 19:12:01.847538 521487 cri.go:89] found id: ""
I0319 19:12:01.847546 521487 logs.go:282] 2 containers: [bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9 45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a]
I0319 19:12:01.847606 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.852014 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.856137 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0319 19:12:01.856221 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0319 19:12:01.909386 521487 cri.go:89] found id: "c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0"
I0319 19:12:01.909413 521487 cri.go:89] found id: "49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4"
I0319 19:12:01.909419 521487 cri.go:89] found id: ""
I0319 19:12:01.909426 521487 logs.go:282] 2 containers: [c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0 49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4]
I0319 19:12:01.909487 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.913873 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.917717 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0319 19:12:01.917798 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0319 19:12:01.959483 521487 cri.go:89] found id: "5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587"
I0319 19:12:01.959560 521487 cri.go:89] found id: "3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c"
I0319 19:12:01.959597 521487 cri.go:89] found id: ""
I0319 19:12:01.959631 521487 logs.go:282] 2 containers: [5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587 3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c]
I0319 19:12:01.959724 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.963532 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:01.967259 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0319 19:12:01.967374 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0319 19:12:02.015881 521487 cri.go:89] found id: "06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2"
I0319 19:12:02.015909 521487 cri.go:89] found id: "df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc"
I0319 19:12:02.015916 521487 cri.go:89] found id: ""
I0319 19:12:02.015924 521487 logs.go:282] 2 containers: [06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2 df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc]
I0319 19:12:02.015984 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.020052 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.023598 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0319 19:12:02.023687 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0319 19:12:02.074970 521487 cri.go:89] found id: "c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934"
I0319 19:12:02.074998 521487 cri.go:89] found id: "ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657"
I0319 19:12:02.075003 521487 cri.go:89] found id: ""
I0319 19:12:02.075012 521487 logs.go:282] 2 containers: [c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934 ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657]
I0319 19:12:02.075079 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.079064 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.083205 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0319 19:12:02.083295 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0319 19:12:02.141008 521487 cri.go:89] found id: "12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8"
I0319 19:12:02.141039 521487 cri.go:89] found id: ""
I0319 19:12:02.141048 521487 logs.go:282] 1 containers: [12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8]
I0319 19:12:02.141114 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.145062 521487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0319 19:12:02.145159 521487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0319 19:12:02.186897 521487 cri.go:89] found id: "8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc"
I0319 19:12:02.186924 521487 cri.go:89] found id: "f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec"
I0319 19:12:02.186929 521487 cri.go:89] found id: ""
I0319 19:12:02.186937 521487 logs.go:282] 2 containers: [8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec]
I0319 19:12:02.186996 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.190758 521487 ssh_runner.go:195] Run: which crictl
I0319 19:12:02.194646 521487 logs.go:123] Gathering logs for kube-apiserver [b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0] ...
I0319 19:12:02.194729 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0"
I0319 19:12:02.260079 521487 logs.go:123] Gathering logs for etcd [9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432] ...
I0319 19:12:02.260119 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432"
I0319 19:12:02.308149 521487 logs.go:123] Gathering logs for coredns [bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9] ...
I0319 19:12:02.308179 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9"
I0319 19:12:02.355515 521487 logs.go:123] Gathering logs for coredns [45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a] ...
I0319 19:12:02.355541 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a"
I0319 19:12:02.395216 521487 logs.go:123] Gathering logs for kube-controller-manager [06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2] ...
I0319 19:12:02.395248 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2"
I0319 19:12:02.481469 521487 logs.go:123] Gathering logs for storage-provisioner [f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec] ...
I0319 19:12:02.481507 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec"
I0319 19:12:02.528694 521487 logs.go:123] Gathering logs for containerd ...
I0319 19:12:02.528723 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0319 19:12:02.582259 521487 logs.go:123] Gathering logs for describe nodes ...
I0319 19:12:02.582296 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0319 19:12:02.730960 521487 logs.go:123] Gathering logs for kube-scheduler [49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4] ...
I0319 19:12:02.730996 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4"
I0319 19:12:02.773486 521487 logs.go:123] Gathering logs for kube-proxy [5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587] ...
I0319 19:12:02.773521 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587"
I0319 19:12:02.833819 521487 logs.go:123] Gathering logs for kindnet [ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657] ...
I0319 19:12:02.833847 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657"
I0319 19:12:02.892073 521487 logs.go:123] Gathering logs for container status ...
I0319 19:12:02.892103 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0319 19:12:02.947033 521487 logs.go:123] Gathering logs for dmesg ...
I0319 19:12:02.947064 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0319 19:12:02.964937 521487 logs.go:123] Gathering logs for kindnet [c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934] ...
I0319 19:12:02.964964 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934"
I0319 19:12:03.025385 521487 logs.go:123] Gathering logs for storage-provisioner [8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc] ...
I0319 19:12:03.025414 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc"
I0319 19:12:03.078094 521487 logs.go:123] Gathering logs for kubelet ...
I0319 19:12:03.078119 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0319 19:12:03.127371 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.870626 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-gnznl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-gnznl" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.127594 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.872945 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.127813 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.873232 662 reflector.go:138] object-"kube-system"/"coredns-token-mptx4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mptx4" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.128025 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.880644 662 reflector.go:138] object-"kube-system"/"kindnet-token-w7f6q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-w7f6q" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.128239 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.881550 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-wx6lx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-wx6lx" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.128443 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:18 old-k8s-version-908523 kubelet[662]: E0319 19:06:18.884701 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.132335 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:19 old-k8s-version-908523 kubelet[662]: E0319 19:06:19.002056 662 reflector.go:138] object-"default"/"default-token-2xksl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2xksl" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.132630 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:19 old-k8s-version-908523 kubelet[662]: E0319 19:06:19.003962 662 reflector.go:138] object-"kube-system"/"metrics-server-token-rqzd4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-rqzd4" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.140667 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:20 old-k8s-version-908523 kubelet[662]: E0319 19:06:20.811808 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:12:03.141069 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:21 old-k8s-version-908523 kubelet[662]: E0319 19:06:21.265120 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.144417 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:37 old-k8s-version-908523 kubelet[662]: E0319 19:06:37.104000 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:12:03.144926 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:37 old-k8s-version-908523 kubelet[662]: E0319 19:06:37.880129 662 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-hd7qz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-hd7qz" is forbidden: User "system:node:old-k8s-version-908523" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-908523' and this object
W0319 19:12:03.146826 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:49 old-k8s-version-908523 kubelet[662]: E0319 19:06:49.389186 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.147155 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:50 old-k8s-version-908523 kubelet[662]: E0319 19:06:50.396880 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.147676 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:51 old-k8s-version-908523 kubelet[662]: E0319 19:06:51.091415 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.148009 521487 logs.go:138] Found kubelet problem: Mar 19 19:06:57 old-k8s-version-908523 kubelet[662]: E0319 19:06:57.352720 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.150826 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:02 old-k8s-version-908523 kubelet[662]: E0319 19:07:02.111155 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:12:03.151421 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:10 old-k8s-version-908523 kubelet[662]: E0319 19:07:10.454369 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.151606 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:17 old-k8s-version-908523 kubelet[662]: E0319 19:07:17.091490 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.151942 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:17 old-k8s-version-908523 kubelet[662]: E0319 19:07:17.352786 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.152127 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:28 old-k8s-version-908523 kubelet[662]: E0319 19:07:28.091504 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.152453 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:29 old-k8s-version-908523 kubelet[662]: E0319 19:07:29.090789 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.153044 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:41 old-k8s-version-908523 kubelet[662]: E0319 19:07:41.541724 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.153230 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:42 old-k8s-version-908523 kubelet[662]: E0319 19:07:42.092835 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.153591 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:47 old-k8s-version-908523 kubelet[662]: E0319 19:07:47.352704 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.156049 521487 logs.go:138] Found kubelet problem: Mar 19 19:07:54 old-k8s-version-908523 kubelet[662]: E0319 19:07:54.103301 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:12:03.156375 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:01 old-k8s-version-908523 kubelet[662]: E0319 19:08:01.090920 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.156568 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:08 old-k8s-version-908523 kubelet[662]: E0319 19:08:08.094609 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.156895 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:15 old-k8s-version-908523 kubelet[662]: E0319 19:08:15.090816 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.157078 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:21 old-k8s-version-908523 kubelet[662]: E0319 19:08:21.091743 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.157664 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:28 old-k8s-version-908523 kubelet[662]: E0319 19:08:28.691752 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.157849 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:33 old-k8s-version-908523 kubelet[662]: E0319 19:08:33.091096 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.158177 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:37 old-k8s-version-908523 kubelet[662]: E0319 19:08:37.352894 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.158362 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:46 old-k8s-version-908523 kubelet[662]: E0319 19:08:46.091350 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.158689 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:49 old-k8s-version-908523 kubelet[662]: E0319 19:08:49.090767 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.158929 521487 logs.go:138] Found kubelet problem: Mar 19 19:08:57 old-k8s-version-908523 kubelet[662]: E0319 19:08:57.091105 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.159257 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:03 old-k8s-version-908523 kubelet[662]: E0319 19:09:03.090799 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.159441 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:09 old-k8s-version-908523 kubelet[662]: E0319 19:09:09.091236 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.159771 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:14 old-k8s-version-908523 kubelet[662]: E0319 19:09:14.093204 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.162205 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:24 old-k8s-version-908523 kubelet[662]: E0319 19:09:24.099584 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0319 19:12:03.162533 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:29 old-k8s-version-908523 kubelet[662]: E0319 19:09:29.090926 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.162717 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:39 old-k8s-version-908523 kubelet[662]: E0319 19:09:39.091384 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.163044 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:42 old-k8s-version-908523 kubelet[662]: E0319 19:09:42.091266 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.163227 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:52 old-k8s-version-908523 kubelet[662]: E0319 19:09:52.095436 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.163817 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:55 old-k8s-version-908523 kubelet[662]: E0319 19:09:55.921112 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.164142 521487 logs.go:138] Found kubelet problem: Mar 19 19:09:57 old-k8s-version-908523 kubelet[662]: E0319 19:09:57.353126 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.164325 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:07 old-k8s-version-908523 kubelet[662]: E0319 19:10:07.091358 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.164654 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:12 old-k8s-version-908523 kubelet[662]: E0319 19:10:12.098552 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.164838 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:21 old-k8s-version-908523 kubelet[662]: E0319 19:10:21.092107 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.165187 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:25 old-k8s-version-908523 kubelet[662]: E0319 19:10:25.090826 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.165376 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:32 old-k8s-version-908523 kubelet[662]: E0319 19:10:32.096004 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.165701 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:37 old-k8s-version-908523 kubelet[662]: E0319 19:10:37.090777 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.165885 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:43 old-k8s-version-908523 kubelet[662]: E0319 19:10:43.091133 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.166214 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:52 old-k8s-version-908523 kubelet[662]: E0319 19:10:52.095231 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.166397 521487 logs.go:138] Found kubelet problem: Mar 19 19:10:54 old-k8s-version-908523 kubelet[662]: E0319 19:10:54.091203 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.166580 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:06 old-k8s-version-908523 kubelet[662]: E0319 19:11:06.092747 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.166912 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:07 old-k8s-version-908523 kubelet[662]: E0319 19:11:07.090893 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.167238 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:19 old-k8s-version-908523 kubelet[662]: E0319 19:11:19.091499 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.167421 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:21 old-k8s-version-908523 kubelet[662]: E0319 19:11:21.091450 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.167750 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:30 old-k8s-version-908523 kubelet[662]: E0319 19:11:30.091870 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.167938 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:35 old-k8s-version-908523 kubelet[662]: E0319 19:11:35.091408 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.168264 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:43 old-k8s-version-908523 kubelet[662]: E0319 19:11:43.091203 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.168447 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:49 old-k8s-version-908523 kubelet[662]: E0319 19:11:49.091197 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.168778 521487 logs.go:138] Found kubelet problem: Mar 19 19:11:58 old-k8s-version-908523 kubelet[662]: E0319 19:11:58.090729 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.168962 521487 logs.go:138] Found kubelet problem: Mar 19 19:12:02 old-k8s-version-908523 kubelet[662]: E0319 19:12:02.093089 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0319 19:12:03.168973 521487 logs.go:123] Gathering logs for etcd [590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe] ...
I0319 19:12:03.168988 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe"
I0319 19:12:03.214290 521487 logs.go:123] Gathering logs for kube-scheduler [c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0] ...
I0319 19:12:03.214322 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0"
I0319 19:12:03.266468 521487 logs.go:123] Gathering logs for kube-proxy [3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c] ...
I0319 19:12:03.266496 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c"
I0319 19:12:03.313812 521487 logs.go:123] Gathering logs for kube-controller-manager [df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc] ...
I0319 19:12:03.313838 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc"
I0319 19:12:03.383425 521487 logs.go:123] Gathering logs for kubernetes-dashboard [12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8] ...
I0319 19:12:03.383465 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8"
I0319 19:12:03.427288 521487 logs.go:123] Gathering logs for kube-apiserver [4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b] ...
I0319 19:12:03.427317 521487 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b"
I0319 19:12:03.503630 521487 out.go:358] Setting ErrFile to fd 2...
I0319 19:12:03.503664 521487 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0319 19:12:03.503734 521487 out.go:270] X Problems detected in kubelet:
W0319 19:12:03.503749 521487 out.go:270] Mar 19 19:11:35 old-k8s-version-908523 kubelet[662]: E0319 19:11:35.091408 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.503756 521487 out.go:270] Mar 19 19:11:43 old-k8s-version-908523 kubelet[662]: E0319 19:11:43.091203 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.503767 521487 out.go:270] Mar 19 19:11:49 old-k8s-version-908523 kubelet[662]: E0319 19:11:49.091197 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0319 19:12:03.503776 521487 out.go:270] Mar 19 19:11:58 old-k8s-version-908523 kubelet[662]: E0319 19:11:58.090729 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
W0319 19:12:03.503833 521487 out.go:270] Mar 19 19:12:02 old-k8s-version-908523 kubelet[662]: E0319 19:12:02.093089 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0319 19:12:03.503840 521487 out.go:358] Setting ErrFile to fd 2...
I0319 19:12:03.503858 521487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 19:12:01.561262 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:12:03.562892 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:12:06.061334 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:12:08.560544 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:12:10.561457 531208 pod_ready.go:103] pod "metrics-server-f79f97bbb-btrsq" in "kube-system" namespace has status "Ready":"False"
I0319 19:12:13.505192 521487 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0319 19:12:13.516184 521487 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0319 19:12:13.517722 521487 out.go:201]
W0319 19:12:13.519090 521487 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0319 19:12:13.519125 521487 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0319 19:12:13.519144 521487 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0319 19:12:13.519150 521487 out.go:270] *
W0319 19:12:13.520057 521487 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0319 19:12:13.521041 521487 out.go:201]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
2b52ca5624f89 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 605faebb6e21e dashboard-metrics-scraper-8d5bb5db8-rk5mc
12acba9ec1365 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 f4e6a9c1a0109 kubernetes-dashboard-cd95d586-fm22x
bd3a5c1511c00 db91994f4ee8f 5 minutes ago Running coredns 1 f8cfff7aa5813 coredns-74ff55c5b-xmp7g
cd1d3ef417902 1611cd07b61d5 5 minutes ago Running busybox 1 21d87192719b2 busybox
8498ea3c3e6bb ba04bb24b9575 5 minutes ago Running storage-provisioner 1 fe721c0ebe295 storage-provisioner
5d364d4ad6950 25a5233254979 5 minutes ago Running kube-proxy 1 ff473cdb980f6 kube-proxy-scv6d
c3fef602b9793 ee75e27fff91c 5 minutes ago Running kindnet-cni 1 85b18aa9179ea kindnet-vngff
9ec0fb004ae5a 05b738aa1bc63 6 minutes ago Running etcd 1 17b5282e6c5d3 etcd-old-k8s-version-908523
4d1aaa3d9a844 2c08bbbc02d3a 6 minutes ago Running kube-apiserver 1 a4e3cd5162d78 kube-apiserver-old-k8s-version-908523
c40d1e75ce01e e7605f88f17d6 6 minutes ago Running kube-scheduler 1 7dc9ee0e8d557 kube-scheduler-old-k8s-version-908523
06600ca8debc4 1df8a2b116bd1 6 minutes ago Running kube-controller-manager 1 81ac719570915 kube-controller-manager-old-k8s-version-908523
add06f7a44b4e 1611cd07b61d5 6 minutes ago Exited busybox 0 9cfad425a51eb busybox
45c54ebb5c63b db91994f4ee8f 7 minutes ago Exited coredns 0 5417ff082f18d coredns-74ff55c5b-xmp7g
ac9f9f84272d1 ee75e27fff91c 8 minutes ago Exited kindnet-cni 0 1c59317221a77 kindnet-vngff
f8ba5fb2a86cb ba04bb24b9575 8 minutes ago Exited storage-provisioner 0 cf3681c80bb60 storage-provisioner
3814e7a2741d0 25a5233254979 8 minutes ago Exited kube-proxy 0 b0c561b50ec7c kube-proxy-scv6d
49e9e012cc1ec e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 a3912100334c1 kube-scheduler-old-k8s-version-908523
590bcd24dc890 05b738aa1bc63 8 minutes ago Exited etcd 0 f3a7623fded75 etcd-old-k8s-version-908523
b494110f79e60 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 b43a75ba5e8a9 kube-apiserver-old-k8s-version-908523
df7c21410204e 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 443e4b0e83617 kube-controller-manager-old-k8s-version-908523
==> containerd <==
Mar 19 19:08:28 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:08:28.209882418Z" level=info msg="received exit event container_id:\"8fb4f46011d4bd4f05cb2ababac434c63634714ef31e6d63a915d686bb07657d\" id:\"8fb4f46011d4bd4f05cb2ababac434c63634714ef31e6d63a915d686bb07657d\" pid:2964 exit_status:255 exited_at:{seconds:1742411308 nanos:209358629}"
Mar 19 19:08:28 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:08:28.209950792Z" level=info msg="StartContainer for \"8fb4f46011d4bd4f05cb2ababac434c63634714ef31e6d63a915d686bb07657d\" returns successfully"
Mar 19 19:08:28 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:08:28.241067097Z" level=info msg="shim disconnected" id=8fb4f46011d4bd4f05cb2ababac434c63634714ef31e6d63a915d686bb07657d namespace=k8s.io
Mar 19 19:08:28 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:08:28.241128694Z" level=warning msg="cleaning up after shim disconnected" id=8fb4f46011d4bd4f05cb2ababac434c63634714ef31e6d63a915d686bb07657d namespace=k8s.io
Mar 19 19:08:28 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:08:28.241138573Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 19:08:28 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:08:28.693319033Z" level=info msg="RemoveContainer for \"386077018af11659963b821cf2babbfdfda4fdd31681cc6520d54ae09d0226bf\""
Mar 19 19:08:28 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:08:28.697665898Z" level=info msg="RemoveContainer for \"386077018af11659963b821cf2babbfdfda4fdd31681cc6520d54ae09d0226bf\" returns successfully"
Mar 19 19:09:24 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:24.091832887Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 19 19:09:24 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:24.096709373Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Mar 19 19:09:24 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:24.099082175Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Mar 19 19:09:24 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:24.099126024Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Mar 19 19:09:55 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:55.092979556Z" level=info msg="CreateContainer within sandbox \"605faebb6e21ea930a0a2fa0f62306de6b143720c53dd34eef1b081de23fee44\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Mar 19 19:09:55 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:55.114107060Z" level=info msg="CreateContainer within sandbox \"605faebb6e21ea930a0a2fa0f62306de6b143720c53dd34eef1b081de23fee44\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821\""
Mar 19 19:09:55 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:55.114877492Z" level=info msg="StartContainer for \"2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821\""
Mar 19 19:09:55 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:55.181760282Z" level=info msg="StartContainer for \"2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821\" returns successfully"
Mar 19 19:09:55 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:55.184364865Z" level=info msg="received exit event container_id:\"2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821\" id:\"2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821\" pid:3220 exit_status:255 exited_at:{seconds:1742411395 nanos:184120390}"
Mar 19 19:09:55 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:55.211794008Z" level=info msg="shim disconnected" id=2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821 namespace=k8s.io
Mar 19 19:09:55 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:55.211859527Z" level=warning msg="cleaning up after shim disconnected" id=2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821 namespace=k8s.io
Mar 19 19:09:55 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:55.211873246Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 19:09:55 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:55.922899603Z" level=info msg="RemoveContainer for \"8fb4f46011d4bd4f05cb2ababac434c63634714ef31e6d63a915d686bb07657d\""
Mar 19 19:09:55 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:09:55.936861201Z" level=info msg="RemoveContainer for \"8fb4f46011d4bd4f05cb2ababac434c63634714ef31e6d63a915d686bb07657d\" returns successfully"
Mar 19 19:12:13 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:12:13.091649238Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 19 19:12:13 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:12:13.096523883Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Mar 19 19:12:13 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:12:13.097910126Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Mar 19 19:12:13 old-k8s-version-908523 containerd[570]: time="2025-03-19T19:12:13.097939558Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [45c54ebb5c63bcfab547ad76899089d70c1569f3306f5663cbd6341ddc8e8e1a] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:37637 - 12889 "HINFO IN 2963015163359454060.1537258451066557790. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027146463s
==> coredns [bd3a5c1511c0028c3bff86d9603f05c92464c2bfe224dbd9129e6b1c447622f9] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:41862 - 20603 "HINFO IN 6194431121337563674.1560059350806171673. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014585025s
==> describe nodes <==
Name: old-k8s-version-908523
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-908523
kubernetes.io/os=linux
minikube.k8s.io/commit=d76a625434f413a89ad1bb610dea10300ea9201f
minikube.k8s.io/name=old-k8s-version-908523
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_03_19T19_03_43_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 19 Mar 2025 19:03:39 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-908523
AcquireTime: <unset>
RenewTime: Wed, 19 Mar 2025 19:12:11 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 19 Mar 2025 19:07:19 +0000 Wed, 19 Mar 2025 19:03:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 19 Mar 2025 19:07:19 +0000 Wed, 19 Mar 2025 19:03:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 19 Mar 2025 19:07:19 +0000 Wed, 19 Mar 2025 19:03:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 19 Mar 2025 19:07:19 +0000 Wed, 19 Mar 2025 19:03:58 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.85.2
Hostname: old-k8s-version-908523
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: f8a9fa2d706c45b9b6b2ba44e7fc72c8
System UUID: 7bca3c75-a554-405d-b9d4-23670edf0ad7
Boot ID: 740cce80-1f77-45f1-b1a3-ff36876cad2e
Kernel Version: 5.15.0-1077-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.25
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m45s
kube-system coredns-74ff55c5b-xmp7g 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m17s
kube-system etcd-old-k8s-version-908523 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m24s
kube-system kindnet-vngff 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 8m17s
kube-system kube-apiserver-old-k8s-version-908523 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m24s
kube-system kube-controller-manager-old-k8s-version-908523 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m24s
kube-system kube-proxy-scv6d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m17s
kube-system kube-scheduler-old-k8s-version-908523 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m24s
kube-system metrics-server-9975d5f86-rls8x 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m33s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m14s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-rk5mc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m38s
kubernetes-dashboard kubernetes-dashboard-cd95d586-fm22x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m38s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m44s (x4 over 8m44s) kubelet Node old-k8s-version-908523 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m44s (x4 over 8m44s) kubelet Node old-k8s-version-908523 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m44s (x4 over 8m44s) kubelet Node old-k8s-version-908523 status is now: NodeHasSufficientPID
Normal Starting 8m25s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m24s kubelet Node old-k8s-version-908523 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m24s kubelet Node old-k8s-version-908523 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m24s kubelet Node old-k8s-version-908523 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m24s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m17s kubelet Node old-k8s-version-908523 status is now: NodeReady
Normal Starting 8m15s kube-proxy Starting kube-proxy.
Normal Starting 6m5s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m5s (x8 over 6m5s) kubelet Node old-k8s-version-908523 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m5s (x8 over 6m5s) kubelet Node old-k8s-version-908523 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m5s (x7 over 6m5s) kubelet Node old-k8s-version-908523 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m5s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m54s kube-proxy Starting kube-proxy.
==> dmesg <==
[Mar19 17:46] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
==> etcd [590bcd24dc8906e0e75cd67ff010ec87bc024c2ad65a7bdb440e6aac3346eefe] <==
raft2025/03/19 19:03:32 INFO: 9f0758e1c58a86ed is starting a new election at term 1
raft2025/03/19 19:03:32 INFO: 9f0758e1c58a86ed became candidate at term 2
raft2025/03/19 19:03:32 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
raft2025/03/19 19:03:32 INFO: 9f0758e1c58a86ed became leader at term 2
raft2025/03/19 19:03:32 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
2025-03-19 19:03:32.454032 I | etcdserver: setting up the initial cluster version to 3.4
2025-03-19 19:03:32.456727 N | etcdserver/membership: set the initial cluster version to 3.4
2025-03-19 19:03:32.456907 I | etcdserver/api: enabled capabilities for version 3.4
2025-03-19 19:03:32.457004 I | etcdserver: published {Name:old-k8s-version-908523 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
2025-03-19 19:03:32.457103 I | embed: ready to serve client requests
2025-03-19 19:03:32.458434 I | embed: serving client requests on 127.0.0.1:2379
2025-03-19 19:03:32.459751 I | embed: ready to serve client requests
2025-03-19 19:03:32.463287 I | embed: serving client requests on 192.168.85.2:2379
2025-03-19 19:03:53.760271 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:03:58.195428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:04:08.190147 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:04:18.190416 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:04:28.190441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:04:38.190188 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:04:48.197358 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:04:58.190355 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:05:08.190216 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:05:18.190283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:05:28.190332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:05:38.190074 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [9ec0fb004ae5a8d20a43ab45b65c3a8156ce87f5ccfab96e2918e241a7c87432] <==
2025-03-19 19:08:08.821977 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:08:18.821762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:08:28.821754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:08:38.821806 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:08:48.821809 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:08:58.821753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:09:08.821795 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:09:18.821760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:09:28.821647 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:09:38.821857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:09:48.821839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:09:58.821842 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:10:08.821796 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:10:18.823778 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:10:28.821939 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:10:38.821800 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:10:48.821829 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:10:58.821831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:11:08.821882 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:11:18.822396 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:11:28.821921 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:11:38.821819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:11:48.821809 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:11:58.821930 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-19 19:12:08.821744 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
19:12:15 up 2:54, 0 users, load average: 2.43, 2.28, 2.54
Linux old-k8s-version-908523 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [ac9f9f84272d131b80427eead390b747a75fe32eeabf88d06483293f44efc657] <==
I0319 19:04:03.697231 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
I0319 19:04:03.998233 1 controller.go:361] Starting controller kube-network-policies
I0319 19:04:03.998318 1 controller.go:365] Waiting for informer caches to sync
I0319 19:04:03.998349 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I0319 19:04:04.201094 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0319 19:04:04.201118 1 metrics.go:61] Registering metrics
I0319 19:04:04.201167 1 controller.go:401] Syncing nftables rules
I0319 19:04:13.997785 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:04:13.997822 1 main.go:301] handling current node
I0319 19:04:23.999483 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:04:23.999867 1 main.go:301] handling current node
I0319 19:04:34.000927 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:04:34.000974 1 main.go:301] handling current node
I0319 19:04:44.005417 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:04:44.005517 1 main.go:301] handling current node
I0319 19:04:54.005278 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:04:54.005371 1 main.go:301] handling current node
I0319 19:05:03.998550 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:05:03.998628 1 main.go:301] handling current node
I0319 19:05:14.000963 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:05:14.001014 1 main.go:301] handling current node
I0319 19:05:23.997536 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:05:23.997603 1 main.go:301] handling current node
I0319 19:05:33.997502 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:05:33.997649 1 main.go:301] handling current node
==> kindnet [c3fef602b97932b84835630f9f35b36e7a5aa0df9aed9ae90d2346488dc8d934] <==
I0319 19:10:11.997868 1 main.go:301] handling current node
I0319 19:10:21.997235 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:10:21.997274 1 main.go:301] handling current node
I0319 19:10:31.997861 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:10:31.997985 1 main.go:301] handling current node
I0319 19:10:41.997796 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:10:41.997900 1 main.go:301] handling current node
I0319 19:10:51.997117 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:10:51.997150 1 main.go:301] handling current node
I0319 19:11:01.997782 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:11:01.997819 1 main.go:301] handling current node
I0319 19:11:11.997900 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:11:11.997938 1 main.go:301] handling current node
I0319 19:11:21.997860 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:11:21.997898 1 main.go:301] handling current node
I0319 19:11:31.998076 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:11:31.998360 1 main.go:301] handling current node
I0319 19:11:41.997824 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:11:41.997857 1 main.go:301] handling current node
I0319 19:11:51.997851 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:11:51.997888 1 main.go:301] handling current node
I0319 19:12:01.997787 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:12:01.997827 1 main.go:301] handling current node
I0319 19:12:11.997818 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0319 19:12:11.997854 1 main.go:301] handling current node
==> kube-apiserver [4d1aaa3d9a844db9de12fb2cd967fd1ae0abd14236bb49101afb10c0fa91153b] <==
I0319 19:08:54.322105 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0319 19:08:54.322115 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0319 19:09:21.509398 1 handler_proxy.go:102] no RequestInfo found in the context
E0319 19:09:21.509491 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0319 19:09:21.509501 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0319 19:09:25.101201 1 client.go:360] parsed scheme: "passthrough"
I0319 19:09:25.101443 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0319 19:09:25.101464 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0319 19:10:06.732051 1 client.go:360] parsed scheme: "passthrough"
I0319 19:10:06.732103 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0319 19:10:06.732289 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0319 19:10:39.901607 1 client.go:360] parsed scheme: "passthrough"
I0319 19:10:39.901655 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0319 19:10:39.901666 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0319 19:11:15.944036 1 client.go:360] parsed scheme: "passthrough"
I0319 19:11:15.944093 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0319 19:11:15.944102 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0319 19:11:19.990239 1 handler_proxy.go:102] no RequestInfo found in the context
E0319 19:11:19.990350 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0319 19:11:19.990364 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0319 19:11:52.628714 1 client.go:360] parsed scheme: "passthrough"
I0319 19:11:52.628762 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0319 19:11:52.628771 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [b494110f79e606500147391b3646bfcb92978952ee90eedecbdf906207991db0] <==
I0319 19:03:40.433352 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0319 19:03:40.433521 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0319 19:03:40.465224 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0319 19:03:40.469536 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0319 19:03:40.469704 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0319 19:03:40.936951 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0319 19:03:40.985796 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0319 19:03:41.085796 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
I0319 19:03:41.086899 1 controller.go:606] quota admission added evaluator for: endpoints
I0319 19:03:41.096703 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0319 19:03:42.089541 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0319 19:03:42.504820 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0319 19:03:42.554100 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0319 19:03:50.988507 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0319 19:03:58.100521 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0319 19:03:58.482855 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0319 19:04:06.041676 1 client.go:360] parsed scheme: "passthrough"
I0319 19:04:06.041723 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0319 19:04:06.041733 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0319 19:04:42.393073 1 client.go:360] parsed scheme: "passthrough"
I0319 19:04:42.393119 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0319 19:04:42.393129 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0319 19:05:14.559962 1 client.go:360] parsed scheme: "passthrough"
I0319 19:05:14.560029 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0319 19:05:14.560178 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [06600ca8debc4323be519259527c6920d83a6b5bfb6c25b281acfce64250e7d2] <==
E0319 19:08:09.430350 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0319 19:08:15.013190 1 request.go:655] Throttling request took 1.048371201s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0319 19:08:15.865083 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0319 19:08:39.944067 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0319 19:08:47.515808 1 request.go:655] Throttling request took 1.047930689s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
W0319 19:08:48.367331 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0319 19:09:10.446016 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0319 19:09:20.017846 1 request.go:655] Throttling request took 1.048286201s, request: GET:https://192.168.85.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
W0319 19:09:20.871861 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0319 19:09:40.948050 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0319 19:09:52.522506 1 request.go:655] Throttling request took 1.048549291s, request: GET:https://192.168.85.2:8443/apis/networking.k8s.io/v1?timeout=32s
W0319 19:09:53.374192 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0319 19:10:11.453360 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0319 19:10:25.024626 1 request.go:655] Throttling request took 1.048410154s, request: GET:https://192.168.85.2:8443/apis/networking.k8s.io/v1?timeout=32s
W0319 19:10:25.878207 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0319 19:10:41.955286 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0319 19:10:57.529002 1 request.go:655] Throttling request took 1.048352454s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0319 19:10:58.380471 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0319 19:11:12.457832 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0319 19:11:30.030884 1 request.go:655] Throttling request took 1.047867526s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0319 19:11:30.882885 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0319 19:11:42.959715 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0319 19:12:02.533640 1 request.go:655] Throttling request took 1.048284732s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1?timeout=32s
W0319 19:12:03.385329 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0319 19:12:13.461622 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
==> kube-controller-manager [df7c21410204e85eb39d90149b5ed0f5a8856ec32b53b35a6be2537ac16a9bfc] <==
I0319 19:03:58.314475 1 event.go:291] "Event occurred" object="old-k8s-version-908523" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-908523 event: Registered Node old-k8s-version-908523 in Controller"
I0319 19:03:58.416644 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0319 19:03:58.435884 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0319 19:03:58.482663 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0319 19:03:58.482713 1 shared_informer.go:247] Caches are synced for resource quota
I0319 19:03:58.486555 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-gffct"
I0319 19:03:58.497208 1 shared_informer.go:247] Caches are synced for resource quota
I0319 19:03:58.641912 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0319 19:03:58.686220 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-908523" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0319 19:03:58.708606 1 range_allocator.go:373] Set node old-k8s-version-908523 PodCIDR to [10.244.0.0/24]
E0319 19:03:58.715550 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0319 19:03:58.716375 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-xmp7g"
I0319 19:03:58.743311 1 shared_informer.go:247] Caches are synced for garbage collector
I0319 19:03:58.747183 1 shared_informer.go:247] Caches are synced for garbage collector
I0319 19:03:58.747198 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0319 19:03:58.747321 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-scv6d"
I0319 19:03:58.906421 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vngff"
I0319 19:04:01.754334 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0319 19:04:01.810921 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-gffct"
I0319 19:04:03.305911 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0319 19:05:41.799283 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I0319 19:05:41.815051 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
E0319 19:05:41.845061 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
E0319 19:05:41.991334 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0319 19:05:42.011252 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
==> kube-proxy [3814e7a2741d02ba1dcd41f4111e2e495848d216d43cf8053822c9041e24408c] <==
I0319 19:04:00.411669 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0319 19:04:00.411786 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0319 19:04:00.469114 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0319 19:04:00.469204 1 server_others.go:185] Using iptables Proxier.
I0319 19:04:00.469431 1 server.go:650] Version: v1.20.0
I0319 19:04:00.469929 1 config.go:315] Starting service config controller
I0319 19:04:00.469938 1 shared_informer.go:240] Waiting for caches to sync for service config
I0319 19:04:00.477513 1 config.go:224] Starting endpoint slice config controller
I0319 19:04:00.477537 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0319 19:04:00.570065 1 shared_informer.go:247] Caches are synced for service config
I0319 19:04:00.578634 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [5d364d4ad69506401718a8ae8dffe088a904901b8b1748f5affad99351eb7587] <==
I0319 19:06:21.559994 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0319 19:06:21.560075 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0319 19:06:21.582470 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0319 19:06:21.582588 1 server_others.go:185] Using iptables Proxier.
I0319 19:06:21.583071 1 server.go:650] Version: v1.20.0
I0319 19:06:21.583842 1 config.go:315] Starting service config controller
I0319 19:06:21.583864 1 shared_informer.go:240] Waiting for caches to sync for service config
I0319 19:06:21.583881 1 config.go:224] Starting endpoint slice config controller
I0319 19:06:21.583884 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0319 19:06:21.684051 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0319 19:06:21.684093 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [49e9e012cc1ecb6c03a240aa80a3ed464a9bde4ac8bf0675535a0d1bbb32ebc4] <==
I0319 19:03:34.676863 1 serving.go:331] Generated self-signed cert in-memory
W0319 19:03:39.611013 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0319 19:03:39.611247 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0319 19:03:39.611365 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0319 19:03:39.611469 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0319 19:03:39.696449 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0319 19:03:39.700650 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0319 19:03:39.701400 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0319 19:03:39.701568 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0319 19:03:39.702721 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0319 19:03:39.704231 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0319 19:03:39.704505 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0319 19:03:39.705998 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0319 19:03:39.706403 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0319 19:03:39.706720 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0319 19:03:39.713055 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0319 19:03:39.713527 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0319 19:03:39.713794 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0319 19:03:39.713981 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0319 19:03:39.714202 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0319 19:03:39.714334 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0319 19:03:40.576562 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0319 19:03:41.301693 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [c40d1e75ce01e76f3035570c55ac656cd3f9205f3d2d1d2cdb28ceb2d9566af0] <==
I0319 19:06:12.431536 1 serving.go:331] Generated self-signed cert in-memory
W0319 19:06:18.741785 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0319 19:06:18.741993 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0319 19:06:18.742770 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0319 19:06:18.742847 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0319 19:06:19.010407 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0319 19:06:19.010491 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0319 19:06:19.010498 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0319 19:06:19.010520 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0319 19:06:19.216479 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Mar 19 19:10:43 old-k8s-version-908523 kubelet[662]: E0319 19:10:43.091133 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 19 19:10:52 old-k8s-version-908523 kubelet[662]: I0319 19:10:52.094839 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821
Mar 19 19:10:52 old-k8s-version-908523 kubelet[662]: E0319 19:10:52.095231 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
Mar 19 19:10:54 old-k8s-version-908523 kubelet[662]: E0319 19:10:54.091203 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 19 19:11:06 old-k8s-version-908523 kubelet[662]: E0319 19:11:06.092747 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 19 19:11:07 old-k8s-version-908523 kubelet[662]: I0319 19:11:07.090512 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821
Mar 19 19:11:07 old-k8s-version-908523 kubelet[662]: E0319 19:11:07.090893 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
Mar 19 19:11:19 old-k8s-version-908523 kubelet[662]: I0319 19:11:19.090570 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821
Mar 19 19:11:19 old-k8s-version-908523 kubelet[662]: E0319 19:11:19.091499 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
Mar 19 19:11:21 old-k8s-version-908523 kubelet[662]: E0319 19:11:21.091450 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 19 19:11:30 old-k8s-version-908523 kubelet[662]: I0319 19:11:30.090963 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821
Mar 19 19:11:30 old-k8s-version-908523 kubelet[662]: E0319 19:11:30.091870 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
Mar 19 19:11:35 old-k8s-version-908523 kubelet[662]: E0319 19:11:35.091408 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 19 19:11:43 old-k8s-version-908523 kubelet[662]: I0319 19:11:43.090401 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821
Mar 19 19:11:43 old-k8s-version-908523 kubelet[662]: E0319 19:11:43.091203 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
Mar 19 19:11:49 old-k8s-version-908523 kubelet[662]: E0319 19:11:49.091197 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 19 19:11:58 old-k8s-version-908523 kubelet[662]: I0319 19:11:58.090392 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821
Mar 19 19:11:58 old-k8s-version-908523 kubelet[662]: E0319 19:11:58.090729 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
Mar 19 19:12:02 old-k8s-version-908523 kubelet[662]: E0319 19:12:02.093089 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 19 19:12:11 old-k8s-version-908523 kubelet[662]: I0319 19:12:11.090424 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2b52ca5624f89d76d3e55a24a4e188cb355624008a88d4c5f25f51e9ba559821
Mar 19 19:12:11 old-k8s-version-908523 kubelet[662]: E0319 19:12:11.090780 662 pod_workers.go:191] Error syncing pod 7e41ca1c-c396-4ba2-ba1a-6c8d1629c686 ("dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rk5mc_kubernetes-dashboard(7e41ca1c-c396-4ba2-ba1a-6c8d1629c686)"
Mar 19 19:12:13 old-k8s-version-908523 kubelet[662]: E0319 19:12:13.098238 662 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Mar 19 19:12:13 old-k8s-version-908523 kubelet[662]: E0319 19:12:13.098699 662 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Mar 19 19:12:13 old-k8s-version-908523 kubelet[662]: E0319 19:12:13.098867 662 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-rqzd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Mar 19 19:12:13 old-k8s-version-908523 kubelet[662]: E0319 19:12:13.098911 662 pod_workers.go:191] Error syncing pod e781962d-7fc6-4cc9-b772-633328007948 ("metrics-server-9975d5f86-rls8x_kube-system(e781962d-7fc6-4cc9-b772-633328007948)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
==> kubernetes-dashboard [12acba9ec13651241fb47ff8efae0746f0c7c6ed4345e5072db7acabca9840b8] <==
2025/03/19 19:06:43 Starting overwatch
2025/03/19 19:06:43 Using namespace: kubernetes-dashboard
2025/03/19 19:06:43 Using in-cluster config to connect to apiserver
2025/03/19 19:06:43 Using secret token for csrf signing
2025/03/19 19:06:43 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/03/19 19:06:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/03/19 19:06:43 Successful initial request to the apiserver, version: v1.20.0
2025/03/19 19:06:43 Generating JWE encryption key
2025/03/19 19:06:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/03/19 19:06:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/03/19 19:06:45 Initializing JWE encryption key from synchronized object
2025/03/19 19:06:45 Creating in-cluster Sidecar client
2025/03/19 19:06:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:06:45 Serving insecurely on HTTP port: 9090
2025/03/19 19:07:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:07:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:08:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:08:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:09:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:09:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:10:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:10:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:11:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:11:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/19 19:12:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [8498ea3c3e6bb5da63d36e362688359dfba7e99d768783e36b4cc50b6447f4cc] <==
I0319 19:06:21.792112 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0319 19:06:21.805857 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0319 19:06:21.806147 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0319 19:06:39.255372 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0319 19:06:39.255876 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908523_fe20e50d-5036-4672-9391-f64068b6edd4!
I0319 19:06:39.258300 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56465735-e897-48f3-bc4d-5b169a0c0729", APIVersion:"v1", ResourceVersion:"783", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-908523_fe20e50d-5036-4672-9391-f64068b6edd4 became leader
I0319 19:06:39.356909 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908523_fe20e50d-5036-4672-9391-f64068b6edd4!
==> storage-provisioner [f8ba5fb2a86cb53ce045af1c1ceaaef1411e0885bac1ca450f1774354bd477ec] <==
I0319 19:04:02.532234 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0319 19:04:02.558699 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0319 19:04:02.558746 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0319 19:04:02.627563 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0319 19:04:02.629555 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908523_b711349a-d90f-46c6-8368-0d704ed33453!
I0319 19:04:02.641826 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56465735-e897-48f3-bc4d-5b169a0c0729", APIVersion:"v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-908523_b711349a-d90f-46c6-8368-0d704ed33453 became leader
I0319 19:04:02.730587 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908523_b711349a-d90f-46c6-8368-0d704ed33453!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-908523 -n old-k8s-version-908523
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-908523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-rls8x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-908523 describe pod metrics-server-9975d5f86-rls8x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-908523 describe pod metrics-server-9975d5f86-rls8x: exit status 1 (97.823796ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-rls8x" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-908523 describe pod metrics-server-9975d5f86-rls8x: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (381.81s)