=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-807851 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-807851 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m16.844165461s)
-- stdout --
* [old-k8s-version-807851] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20591
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20591-279421/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-279421/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-807851" primary control-plane node in "old-k8s-version-807851" cluster
* Pulling base image v0.0.46-1743675393-20591 ...
* Restarting existing docker container for "old-k8s-version-807851" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-807851 addons enable metrics-server
* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
-- /stdout --
** stderr **
I0403 18:56:25.115768 492847 out.go:345] Setting OutFile to fd 1 ...
I0403 18:56:25.115887 492847 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:56:25.115899 492847 out.go:358] Setting ErrFile to fd 2...
I0403 18:56:25.115905 492847 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:56:25.116174 492847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-279421/.minikube/bin
I0403 18:56:25.116581 492847 out.go:352] Setting JSON to false
I0403 18:56:25.117577 492847 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9536,"bootTime":1743697049,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0403 18:56:25.117704 492847 start.go:139] virtualization:
I0403 18:56:25.120819 492847 out.go:177] * [old-k8s-version-807851] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0403 18:56:25.124863 492847 out.go:177] - MINIKUBE_LOCATION=20591
I0403 18:56:25.125000 492847 notify.go:220] Checking for updates...
I0403 18:56:25.130835 492847 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0403 18:56:25.133793 492847 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20591-279421/kubeconfig
I0403 18:56:25.136697 492847 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-279421/.minikube
I0403 18:56:25.139585 492847 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0403 18:56:25.142476 492847 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0403 18:56:25.145903 492847 config.go:182] Loaded profile config "old-k8s-version-807851": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0403 18:56:25.149467 492847 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0403 18:56:25.152385 492847 driver.go:394] Setting default libvirt URI to qemu:///system
I0403 18:56:25.186236 492847 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0403 18:56:25.186364 492847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0403 18:56:25.245125 492847 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-03 18:56:25.235362645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0403 18:56:25.245240 492847 docker.go:318] overlay module found
I0403 18:56:25.248401 492847 out.go:177] * Using the docker driver based on existing profile
I0403 18:56:25.251301 492847 start.go:297] selected driver: docker
I0403 18:56:25.251325 492847 start.go:901] validating driver "docker" against &{Name:old-k8s-version-807851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-807851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0403 18:56:25.251428 492847 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0403 18:56:25.252172 492847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0403 18:56:25.302875 492847 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-03 18:56:25.293737493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0403 18:56:25.303225 492847 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0403 18:56:25.303259 492847 cni.go:84] Creating CNI manager for ""
I0403 18:56:25.303325 492847 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0403 18:56:25.303375 492847 start.go:340] cluster config:
{Name:old-k8s-version-807851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-807851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0403 18:56:25.308436 492847 out.go:177] * Starting "old-k8s-version-807851" primary control-plane node in "old-k8s-version-807851" cluster
I0403 18:56:25.311284 492847 cache.go:121] Beginning downloading kic base image for docker with containerd
I0403 18:56:25.314112 492847 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
I0403 18:56:25.316976 492847 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
I0403 18:56:25.317114 492847 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0403 18:56:25.317148 492847 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-279421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0403 18:56:25.317158 492847 cache.go:56] Caching tarball of preloaded images
I0403 18:56:25.317230 492847 preload.go:172] Found /home/jenkins/minikube-integration/20591-279421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0403 18:56:25.317246 492847 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0403 18:56:25.317366 492847 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/old-k8s-version-807851/config.json ...
I0403 18:56:25.337544 492847 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
I0403 18:56:25.337573 492847 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
I0403 18:56:25.337591 492847 cache.go:230] Successfully downloaded all kic artifacts
I0403 18:56:25.337614 492847 start.go:360] acquireMachinesLock for old-k8s-version-807851: {Name:mkc26e47edf3a391900cda87d2a2d8919faf985e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0403 18:56:25.337710 492847 start.go:364] duration metric: took 73.125µs to acquireMachinesLock for "old-k8s-version-807851"
I0403 18:56:25.337736 492847 start.go:96] Skipping create...Using existing machine configuration
I0403 18:56:25.337745 492847 fix.go:54] fixHost starting:
I0403 18:56:25.337998 492847 cli_runner.go:164] Run: docker container inspect old-k8s-version-807851 --format={{.State.Status}}
I0403 18:56:25.354616 492847 fix.go:112] recreateIfNeeded on old-k8s-version-807851: state=Stopped err=<nil>
W0403 18:56:25.354647 492847 fix.go:138] unexpected machine state, will restart: <nil>
I0403 18:56:25.357798 492847 out.go:177] * Restarting existing docker container for "old-k8s-version-807851" ...
I0403 18:56:25.360623 492847 cli_runner.go:164] Run: docker start old-k8s-version-807851
I0403 18:56:25.635573 492847 cli_runner.go:164] Run: docker container inspect old-k8s-version-807851 --format={{.State.Status}}
I0403 18:56:25.660438 492847 kic.go:430] container "old-k8s-version-807851" state is running.
I0403 18:56:25.660822 492847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-807851
I0403 18:56:25.689715 492847 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/old-k8s-version-807851/config.json ...
I0403 18:56:25.690006 492847 machine.go:93] provisionDockerMachine start ...
I0403 18:56:25.690078 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:25.724890 492847 main.go:141] libmachine: Using SSH client type: native
I0403 18:56:25.725288 492847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33429 <nil> <nil>}
I0403 18:56:25.725299 492847 main.go:141] libmachine: About to run SSH command:
hostname
I0403 18:56:25.728964 492847 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0403 18:56:28.862623 492847 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-807851
I0403 18:56:28.862714 492847 ubuntu.go:169] provisioning hostname "old-k8s-version-807851"
I0403 18:56:28.862814 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:28.900712 492847 main.go:141] libmachine: Using SSH client type: native
I0403 18:56:28.901015 492847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33429 <nil> <nil>}
I0403 18:56:28.901027 492847 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-807851 && echo "old-k8s-version-807851" | sudo tee /etc/hostname
I0403 18:56:29.058086 492847 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-807851
I0403 18:56:29.058177 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:29.095279 492847 main.go:141] libmachine: Using SSH client type: native
I0403 18:56:29.095629 492847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33429 <nil> <nil>}
I0403 18:56:29.095648 492847 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-807851' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-807851/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-807851' | sudo tee -a /etc/hosts;
fi
fi
I0403 18:56:29.246021 492847 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0403 18:56:29.246045 492847 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20591-279421/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-279421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-279421/.minikube}
I0403 18:56:29.246065 492847 ubuntu.go:177] setting up certificates
I0403 18:56:29.246076 492847 provision.go:84] configureAuth start
I0403 18:56:29.246159 492847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-807851
I0403 18:56:29.276466 492847 provision.go:143] copyHostCerts
I0403 18:56:29.276527 492847 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-279421/.minikube/cert.pem, removing ...
I0403 18:56:29.276544 492847 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-279421/.minikube/cert.pem
I0403 18:56:29.276622 492847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-279421/.minikube/cert.pem (1123 bytes)
I0403 18:56:29.276726 492847 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-279421/.minikube/key.pem, removing ...
I0403 18:56:29.276732 492847 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-279421/.minikube/key.pem
I0403 18:56:29.276760 492847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-279421/.minikube/key.pem (1675 bytes)
I0403 18:56:29.276824 492847 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-279421/.minikube/ca.pem, removing ...
I0403 18:56:29.276829 492847 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-279421/.minikube/ca.pem
I0403 18:56:29.276856 492847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-279421/.minikube/ca.pem (1078 bytes)
I0403 18:56:29.276913 492847 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-279421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-807851 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-807851]
I0403 18:56:29.839503 492847 provision.go:177] copyRemoteCerts
I0403 18:56:29.839620 492847 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0403 18:56:29.839679 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:29.857493 492847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/old-k8s-version-807851/id_rsa Username:docker}
I0403 18:56:29.946865 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0403 18:56:29.973362 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0403 18:56:29.998010 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0403 18:56:30.028455 492847 provision.go:87] duration metric: took 782.353887ms to configureAuth
I0403 18:56:30.028490 492847 ubuntu.go:193] setting minikube options for container-runtime
I0403 18:56:30.028713 492847 config.go:182] Loaded profile config "old-k8s-version-807851": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0403 18:56:30.028729 492847 machine.go:96] duration metric: took 4.338705396s to provisionDockerMachine
I0403 18:56:30.028739 492847 start.go:293] postStartSetup for "old-k8s-version-807851" (driver="docker")
I0403 18:56:30.028755 492847 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0403 18:56:30.028819 492847 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0403 18:56:30.028873 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:30.049773 492847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/old-k8s-version-807851/id_rsa Username:docker}
I0403 18:56:30.146954 492847 ssh_runner.go:195] Run: cat /etc/os-release
I0403 18:56:30.150566 492847 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0403 18:56:30.150605 492847 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0403 18:56:30.150616 492847 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0403 18:56:30.150623 492847 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0403 18:56:30.150637 492847 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-279421/.minikube/addons for local assets ...
I0403 18:56:30.150706 492847 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-279421/.minikube/files for local assets ...
I0403 18:56:30.150840 492847 filesync.go:149] local asset: /home/jenkins/minikube-integration/20591-279421/.minikube/files/etc/ssl/certs/2848032.pem -> 2848032.pem in /etc/ssl/certs
I0403 18:56:30.151007 492847 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0403 18:56:30.160464 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/files/etc/ssl/certs/2848032.pem --> /etc/ssl/certs/2848032.pem (1708 bytes)
I0403 18:56:30.185266 492847 start.go:296] duration metric: took 156.504898ms for postStartSetup
I0403 18:56:30.185350 492847 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0403 18:56:30.185390 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:30.202511 492847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/old-k8s-version-807851/id_rsa Username:docker}
I0403 18:56:30.287212 492847 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0403 18:56:30.291471 492847 fix.go:56] duration metric: took 4.953719351s for fixHost
I0403 18:56:30.291496 492847 start.go:83] releasing machines lock for "old-k8s-version-807851", held for 4.953771266s
I0403 18:56:30.291564 492847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-807851
I0403 18:56:30.308291 492847 ssh_runner.go:195] Run: cat /version.json
I0403 18:56:30.308349 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:30.308636 492847 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0403 18:56:30.308697 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:30.331224 492847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/old-k8s-version-807851/id_rsa Username:docker}
I0403 18:56:30.345815 492847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/old-k8s-version-807851/id_rsa Username:docker}
I0403 18:56:30.437197 492847 ssh_runner.go:195] Run: systemctl --version
I0403 18:56:30.644913 492847 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0403 18:56:30.651361 492847 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0403 18:56:30.670620 492847 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0403 18:56:30.670691 492847 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0403 18:56:30.681497 492847 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0403 18:56:30.681520 492847 start.go:495] detecting cgroup driver to use...
I0403 18:56:30.681554 492847 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0403 18:56:30.681599 492847 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0403 18:56:30.698617 492847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0403 18:56:30.713433 492847 docker.go:217] disabling cri-docker service (if available) ...
I0403 18:56:30.713497 492847 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0403 18:56:30.729338 492847 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0403 18:56:30.744507 492847 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0403 18:56:30.878150 492847 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0403 18:56:30.969922 492847 docker.go:233] disabling docker service ...
I0403 18:56:30.969993 492847 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0403 18:56:30.982929 492847 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0403 18:56:30.994573 492847 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0403 18:56:31.078744 492847 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0403 18:56:31.157399 492847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0403 18:56:31.168693 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0403 18:56:31.184810 492847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0403 18:56:31.195862 492847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0403 18:56:31.206030 492847 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0403 18:56:31.206116 492847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0403 18:56:31.215770 492847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0403 18:56:31.225500 492847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0403 18:56:31.234918 492847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0403 18:56:31.244958 492847 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0403 18:56:31.254835 492847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0403 18:56:31.264479 492847 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0403 18:56:31.273354 492847 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0403 18:56:31.281364 492847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0403 18:56:31.368033 492847 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0403 18:56:31.561164 492847 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0403 18:56:31.561237 492847 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0403 18:56:31.565400 492847 start.go:563] Will wait 60s for crictl version
I0403 18:56:31.565458 492847 ssh_runner.go:195] Run: which crictl
I0403 18:56:31.569144 492847 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0403 18:56:31.605819 492847 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0403 18:56:31.605885 492847 ssh_runner.go:195] Run: containerd --version
I0403 18:56:31.632543 492847 ssh_runner.go:195] Run: containerd --version
I0403 18:56:31.658283 492847 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
I0403 18:56:31.661273 492847 cli_runner.go:164] Run: docker network inspect old-k8s-version-807851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0403 18:56:31.677236 492847 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0403 18:56:31.681076 492847 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0403 18:56:31.693963 492847 kubeadm.go:883] updating cluster {Name:old-k8s-version-807851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-807851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0403 18:56:31.694086 492847 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0403 18:56:31.694145 492847 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 18:56:31.730532 492847 containerd.go:627] all images are preloaded for containerd runtime.
I0403 18:56:31.730552 492847 containerd.go:534] Images already preloaded, skipping extraction
I0403 18:56:31.730623 492847 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 18:56:31.769028 492847 containerd.go:627] all images are preloaded for containerd runtime.
I0403 18:56:31.769054 492847 cache_images.go:84] Images are preloaded, skipping loading
I0403 18:56:31.769063 492847 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0403 18:56:31.769176 492847 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-807851 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-807851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0403 18:56:31.769249 492847 ssh_runner.go:195] Run: sudo crictl info
I0403 18:56:31.805395 492847 cni.go:84] Creating CNI manager for ""
I0403 18:56:31.805421 492847 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0403 18:56:31.805433 492847 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0403 18:56:31.805454 492847 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-807851 NodeName:old-k8s-version-807851 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0403 18:56:31.805585 492847 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-807851"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0403 18:56:31.805677 492847 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0403 18:56:31.814651 492847 binaries.go:44] Found k8s binaries, skipping transfer
I0403 18:56:31.814721 492847 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0403 18:56:31.823661 492847 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0403 18:56:31.841806 492847 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0403 18:56:31.860225 492847 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0403 18:56:31.878726 492847 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0403 18:56:31.882143 492847 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0403 18:56:31.892405 492847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0403 18:56:31.984495 492847 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0403 18:56:31.999209 492847 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/old-k8s-version-807851 for IP: 192.168.76.2
I0403 18:56:31.999275 492847 certs.go:194] generating shared ca certs ...
I0403 18:56:31.999305 492847 certs.go:226] acquiring lock for ca certs: {Name:mkbf9d260d0fbb63852ed66b616dcb8dddc3fa66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 18:56:31.999494 492847 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-279421/.minikube/ca.key
I0403 18:56:31.999565 492847 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-279421/.minikube/proxy-client-ca.key
I0403 18:56:31.999600 492847 certs.go:256] generating profile certs ...
I0403 18:56:31.999732 492847 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/old-k8s-version-807851/client.key
I0403 18:56:31.999839 492847 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/old-k8s-version-807851/apiserver.key.ed4ba8af
I0403 18:56:31.999908 492847 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/old-k8s-version-807851/proxy-client.key
I0403 18:56:32.000067 492847 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/284803.pem (1338 bytes)
W0403 18:56:32.000123 492847 certs.go:480] ignoring /home/jenkins/minikube-integration/20591-279421/.minikube/certs/284803_empty.pem, impossibly tiny 0 bytes
I0403 18:56:32.000147 492847 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca-key.pem (1675 bytes)
I0403 18:56:32.000207 492847 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem (1078 bytes)
I0403 18:56:32.000256 492847 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/cert.pem (1123 bytes)
I0403 18:56:32.000313 492847 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/key.pem (1675 bytes)
I0403 18:56:32.000386 492847 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/files/etc/ssl/certs/2848032.pem (1708 bytes)
I0403 18:56:32.001008 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0403 18:56:32.030268 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0403 18:56:32.059438 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0403 18:56:32.083926 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0403 18:56:32.113580 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/old-k8s-version-807851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0403 18:56:32.149450 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/old-k8s-version-807851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0403 18:56:32.175422 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/old-k8s-version-807851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0403 18:56:32.199323 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/old-k8s-version-807851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0403 18:56:32.224072 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/certs/284803.pem --> /usr/share/ca-certificates/284803.pem (1338 bytes)
I0403 18:56:32.249361 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/files/etc/ssl/certs/2848032.pem --> /usr/share/ca-certificates/2848032.pem (1708 bytes)
I0403 18:56:32.274405 492847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0403 18:56:32.298648 492847 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0403 18:56:32.316579 492847 ssh_runner.go:195] Run: openssl version
I0403 18:56:32.322057 492847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284803.pem && ln -fs /usr/share/ca-certificates/284803.pem /etc/ssl/certs/284803.pem"
I0403 18:56:32.331526 492847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284803.pem
I0403 18:56:32.335200 492847 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 3 18:19 /usr/share/ca-certificates/284803.pem
I0403 18:56:32.335266 492847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284803.pem
I0403 18:56:32.342099 492847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284803.pem /etc/ssl/certs/51391683.0"
I0403 18:56:32.351267 492847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2848032.pem && ln -fs /usr/share/ca-certificates/2848032.pem /etc/ssl/certs/2848032.pem"
I0403 18:56:32.360720 492847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2848032.pem
I0403 18:56:32.364316 492847 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 3 18:19 /usr/share/ca-certificates/2848032.pem
I0403 18:56:32.364416 492847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2848032.pem
I0403 18:56:32.371462 492847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2848032.pem /etc/ssl/certs/3ec20f2e.0"
I0403 18:56:32.380329 492847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0403 18:56:32.389890 492847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0403 18:56:32.393538 492847 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 3 18:12 /usr/share/ca-certificates/minikubeCA.pem
I0403 18:56:32.393612 492847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0403 18:56:32.400803 492847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0403 18:56:32.409853 492847 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0403 18:56:32.413429 492847 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0403 18:56:32.420385 492847 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0403 18:56:32.427317 492847 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0403 18:56:32.434388 492847 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0403 18:56:32.441898 492847 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0403 18:56:32.448861 492847 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0403 18:56:32.455961 492847 kubeadm.go:392] StartCluster: {Name:old-k8s-version-807851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-807851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0403 18:56:32.456052 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0403 18:56:32.456114 492847 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0403 18:56:32.499833 492847 cri.go:89] found id: "2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36"
I0403 18:56:32.499866 492847 cri.go:89] found id: "a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4"
I0403 18:56:32.499873 492847 cri.go:89] found id: "12bf0ce63763aa1909530e03246437e9fde1b9ccdfed558f376b3ccbc8ca3ad4"
I0403 18:56:32.499877 492847 cri.go:89] found id: "1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8"
I0403 18:56:32.499906 492847 cri.go:89] found id: "dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426"
I0403 18:56:32.499920 492847 cri.go:89] found id: "5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf"
I0403 18:56:32.499924 492847 cri.go:89] found id: "d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2"
I0403 18:56:32.499928 492847 cri.go:89] found id: "708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf"
I0403 18:56:32.499932 492847 cri.go:89] found id: ""
I0403 18:56:32.499997 492847 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0403 18:56:32.515776 492847 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-04-03T18:56:32Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0403 18:56:32.515867 492847 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0403 18:56:32.525546 492847 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0403 18:56:32.525567 492847 kubeadm.go:593] restartPrimaryControlPlane start ...
I0403 18:56:32.525702 492847 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0403 18:56:32.535411 492847 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0403 18:56:32.535962 492847 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-807851" does not appear in /home/jenkins/minikube-integration/20591-279421/kubeconfig
I0403 18:56:32.536198 492847 kubeconfig.go:62] /home/jenkins/minikube-integration/20591-279421/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-807851" cluster setting kubeconfig missing "old-k8s-version-807851" context setting]
I0403 18:56:32.536711 492847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-279421/kubeconfig: {Name:mkd56fac60608d6ef399d7920f9889f463e24d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 18:56:32.539313 492847 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0403 18:56:32.551052 492847 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0403 18:56:32.551089 492847 kubeadm.go:597] duration metric: took 25.515494ms to restartPrimaryControlPlane
I0403 18:56:32.551099 492847 kubeadm.go:394] duration metric: took 95.14673ms to StartCluster
I0403 18:56:32.551134 492847 settings.go:142] acquiring lock: {Name:mkda4ef6aa45ba7450baec7632aaddbe8adae188 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 18:56:32.551222 492847 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20591-279421/kubeconfig
I0403 18:56:32.552115 492847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-279421/kubeconfig: {Name:mkd56fac60608d6ef399d7920f9889f463e24d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 18:56:32.552375 492847 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0403 18:56:32.552819 492847 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0403 18:56:32.552897 492847 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-807851"
I0403 18:56:32.552921 492847 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-807851"
W0403 18:56:32.552932 492847 addons.go:247] addon storage-provisioner should already be in state true
I0403 18:56:32.552954 492847 host.go:66] Checking if "old-k8s-version-807851" exists ...
I0403 18:56:32.553457 492847 cli_runner.go:164] Run: docker container inspect old-k8s-version-807851 --format={{.State.Status}}
I0403 18:56:32.553716 492847 config.go:182] Loaded profile config "old-k8s-version-807851": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0403 18:56:32.553820 492847 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-807851"
I0403 18:56:32.553859 492847 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-807851"
I0403 18:56:32.554170 492847 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-807851"
I0403 18:56:32.554184 492847 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-807851"
W0403 18:56:32.554191 492847 addons.go:247] addon metrics-server should already be in state true
I0403 18:56:32.554212 492847 host.go:66] Checking if "old-k8s-version-807851" exists ...
I0403 18:56:32.554577 492847 cli_runner.go:164] Run: docker container inspect old-k8s-version-807851 --format={{.State.Status}}
I0403 18:56:32.554741 492847 cli_runner.go:164] Run: docker container inspect old-k8s-version-807851 --format={{.State.Status}}
I0403 18:56:32.557975 492847 addons.go:69] Setting dashboard=true in profile "old-k8s-version-807851"
I0403 18:56:32.558008 492847 addons.go:238] Setting addon dashboard=true in "old-k8s-version-807851"
W0403 18:56:32.558017 492847 addons.go:247] addon dashboard should already be in state true
I0403 18:56:32.558067 492847 host.go:66] Checking if "old-k8s-version-807851" exists ...
I0403 18:56:32.558322 492847 out.go:177] * Verifying Kubernetes components...
I0403 18:56:32.558670 492847 cli_runner.go:164] Run: docker container inspect old-k8s-version-807851 --format={{.State.Status}}
I0403 18:56:32.561571 492847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0403 18:56:32.592139 492847 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0403 18:56:32.600887 492847 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0403 18:56:32.600920 492847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0403 18:56:32.600989 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:32.612757 492847 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0403 18:56:32.615860 492847 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0403 18:56:32.618922 492847 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0403 18:56:32.618949 492847 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0403 18:56:32.619028 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:32.635382 492847 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-807851"
W0403 18:56:32.635405 492847 addons.go:247] addon default-storageclass should already be in state true
I0403 18:56:32.635432 492847 host.go:66] Checking if "old-k8s-version-807851" exists ...
I0403 18:56:32.635616 492847 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0403 18:56:32.635834 492847 cli_runner.go:164] Run: docker container inspect old-k8s-version-807851 --format={{.State.Status}}
I0403 18:56:32.644769 492847 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0403 18:56:32.645033 492847 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0403 18:56:32.645121 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:32.681797 492847 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0403 18:56:32.681818 492847 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0403 18:56:32.681876 492847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807851
I0403 18:56:32.686249 492847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/old-k8s-version-807851/id_rsa Username:docker}
I0403 18:56:32.695501 492847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/old-k8s-version-807851/id_rsa Username:docker}
I0403 18:56:32.705735 492847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/old-k8s-version-807851/id_rsa Username:docker}
I0403 18:56:32.707856 492847 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0403 18:56:32.713348 492847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/old-k8s-version-807851/id_rsa Username:docker}
I0403 18:56:32.739604 492847 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-807851" to be "Ready" ...
I0403 18:56:32.836216 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0403 18:56:32.847276 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0403 18:56:32.853800 492847 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0403 18:56:32.853864 492847 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0403 18:56:32.878623 492847 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0403 18:56:32.878688 492847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0403 18:56:32.885021 492847 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0403 18:56:32.885090 492847 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0403 18:56:32.929980 492847 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0403 18:56:32.930053 492847 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0403 18:56:32.935442 492847 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0403 18:56:32.935506 492847 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
W0403 18:56:33.015618 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.015723 492847 retry.go:31] will retry after 309.326539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.016059 492847 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0403 18:56:33.016104 492847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0403 18:56:33.018754 492847 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0403 18:56:33.018822 492847 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
W0403 18:56:33.024417 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.024502 492847 retry.go:31] will retry after 296.530181ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.041985 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0403 18:56:33.041974 492847 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0403 18:56:33.042051 492847 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0403 18:56:33.064586 492847 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0403 18:56:33.064617 492847 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0403 18:56:33.084859 492847 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0403 18:56:33.084887 492847 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0403 18:56:33.107358 492847 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0403 18:56:33.107385 492847 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0403 18:56:33.138488 492847 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0403 18:56:33.138516 492847 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
W0403 18:56:33.139132 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.139161 492847 retry.go:31] will retry after 344.499322ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.158269 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0403 18:56:33.232797 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.232825 492847 retry.go:31] will retry after 336.446065ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.321573 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0403 18:56:33.325851 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0403 18:56:33.409887 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.409929 492847 retry.go:31] will retry after 498.649594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0403 18:56:33.424046 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.424079 492847 retry.go:31] will retry after 345.046982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.484362 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0403 18:56:33.553771 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.553806 492847 retry.go:31] will retry after 402.774029ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.570141 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0403 18:56:33.639285 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.639320 492847 retry.go:31] will retry after 379.094275ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.770124 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0403 18:56:33.840641 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.840673 492847 retry.go:31] will retry after 470.879075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.908862 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0403 18:56:33.957851 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0403 18:56:33.990437 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:33.990473 492847 retry.go:31] will retry after 422.685346ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:34.018739 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0403 18:56:34.044839 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:34.044918 492847 retry.go:31] will retry after 369.066452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0403 18:56:34.093715 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:34.093759 492847 retry.go:31] will retry after 335.564696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:34.312503 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0403 18:56:34.383758 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:34.383791 492847 retry.go:31] will retry after 737.276109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:34.413943 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0403 18:56:34.414136 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0403 18:56:34.429892 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0403 18:56:34.547264 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:34.547341 492847 retry.go:31] will retry after 1.073365913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0403 18:56:34.547429 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:34.547457 492847 retry.go:31] will retry after 759.248323ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0403 18:56:34.563399 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:34.563432 492847 retry.go:31] will retry after 1.003107751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:34.741062 492847 node_ready.go:53] error getting node "old-k8s-version-807851": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-807851": dial tcp 192.168.76.2:8443: connect: connection refused
I0403 18:56:35.121785 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0403 18:56:35.196269 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:35.196308 492847 retry.go:31] will retry after 1.398020282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:35.306997 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0403 18:56:35.377274 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:35.377305 492847 retry.go:31] will retry after 1.4487675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:35.567702 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0403 18:56:35.621085 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0403 18:56:35.637064 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:35.637140 492847 retry.go:31] will retry after 1.152697025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0403 18:56:35.698653 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:35.698684 492847 retry.go:31] will retry after 1.70129416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:36.594806 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0403 18:56:36.672451 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:36.672484 492847 retry.go:31] will retry after 1.493675757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:36.790284 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0403 18:56:36.826722 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0403 18:56:36.865356 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:36.865426 492847 retry.go:31] will retry after 2.480579008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0403 18:56:36.908710 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:36.908789 492847 retry.go:31] will retry after 1.039127578s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:37.240282 492847 node_ready.go:53] error getting node "old-k8s-version-807851": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-807851": dial tcp 192.168.76.2:8443: connect: connection refused
I0403 18:56:37.400749 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0403 18:56:37.477345 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:37.477379 492847 retry.go:31] will retry after 2.013564751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:37.948678 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0403 18:56:38.017925 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:38.017960 492847 retry.go:31] will retry after 3.146865015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:38.167267 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0403 18:56:38.239116 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:38.239147 492847 retry.go:31] will retry after 3.112601926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:39.240673 492847 node_ready.go:53] error getting node "old-k8s-version-807851": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-807851": dial tcp 192.168.76.2:8443: connect: connection refused
I0403 18:56:39.347046 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0403 18:56:39.417938 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:39.417978 492847 retry.go:31] will retry after 4.179761522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:39.491160 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0403 18:56:39.567170 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:39.567213 492847 retry.go:31] will retry after 1.904494066s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:41.165891 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0403 18:56:41.341343 492847 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:41.341376 492847 retry.go:31] will retry after 6.132234049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0403 18:56:41.351928 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0403 18:56:41.472655 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0403 18:56:43.597944 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0403 18:56:47.475680 492847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0403 18:56:49.973563 492847 node_ready.go:49] node "old-k8s-version-807851" has status "Ready":"True"
I0403 18:56:49.973583 492847 node_ready.go:38] duration metric: took 17.233901076s for node "old-k8s-version-807851" to be "Ready" ...
I0403 18:56:49.973593 492847 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0403 18:56:50.232846 492847 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-bgscq" in "kube-system" namespace to be "Ready" ...
I0403 18:56:50.777475 492847 pod_ready.go:93] pod "coredns-74ff55c5b-bgscq" in "kube-system" namespace has status "Ready":"True"
I0403 18:56:50.777500 492847 pod_ready.go:82] duration metric: took 544.577629ms for pod "coredns-74ff55c5b-bgscq" in "kube-system" namespace to be "Ready" ...
I0403 18:56:50.777512 492847 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-807851" in "kube-system" namespace to be "Ready" ...
I0403 18:56:50.977036 492847 pod_ready.go:93] pod "etcd-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"True"
I0403 18:56:50.977058 492847 pod_ready.go:82] duration metric: took 199.538372ms for pod "etcd-old-k8s-version-807851" in "kube-system" namespace to be "Ready" ...
I0403 18:56:50.977073 492847 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-807851" in "kube-system" namespace to be "Ready" ...
I0403 18:56:53.099277 492847 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:56:54.198557 492847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.846593s)
I0403 18:56:54.198664 492847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.725979253s)
I0403 18:56:54.198682 492847 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-807851"
I0403 18:56:54.198761 492847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.600789042s)
I0403 18:56:54.198948 492847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.723233921s)
I0403 18:56:54.202322 492847 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-807851 addons enable metrics-server
I0403 18:56:54.210859 492847 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
I0403 18:56:54.214099 492847 addons.go:514] duration metric: took 21.661273727s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
I0403 18:56:55.481292 492847 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:56:57.483586 492847 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:56:59.484268 492847 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"True"
I0403 18:56:59.484298 492847 pod_ready.go:82] duration metric: took 8.507216723s for pod "kube-apiserver-old-k8s-version-807851" in "kube-system" namespace to be "Ready" ...
I0403 18:56:59.484310 492847 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace to be "Ready" ...
I0403 18:57:01.490825 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:03.989917 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:05.990578 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:07.991335 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:10.074615 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:12.493608 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:14.991008 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:17.490272 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:19.490529 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:21.498999 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:23.989883 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:26.492409 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:28.990485 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:31.489820 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:33.489910 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:35.490849 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:37.989771 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:40.489956 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:42.490049 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:44.490285 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:46.995312 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:49.491163 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:51.990010 492847 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:52.990870 492847 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"True"
I0403 18:57:52.990898 492847 pod_ready.go:82] duration metric: took 53.50658032s for pod "kube-controller-manager-old-k8s-version-807851" in "kube-system" namespace to be "Ready" ...
I0403 18:57:52.990911 492847 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lb5pb" in "kube-system" namespace to be "Ready" ...
I0403 18:57:52.995934 492847 pod_ready.go:93] pod "kube-proxy-lb5pb" in "kube-system" namespace has status "Ready":"True"
I0403 18:57:52.995962 492847 pod_ready.go:82] duration metric: took 5.043971ms for pod "kube-proxy-lb5pb" in "kube-system" namespace to be "Ready" ...
I0403 18:57:52.995975 492847 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace to be "Ready" ...
I0403 18:57:55.001541 492847 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:57.001717 492847 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:57:59.501522 492847 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:01.502423 492847 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:04.001750 492847 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:06.001860 492847 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:08.002152 492847 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:10.501452 492847 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:13.002244 492847 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:15.501625 492847 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:18.001625 492847 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace has status "Ready":"True"
I0403 18:58:18.001689 492847 pod_ready.go:82] duration metric: took 25.005706171s for pod "kube-scheduler-old-k8s-version-807851" in "kube-system" namespace to be "Ready" ...
I0403 18:58:18.001703 492847 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace to be "Ready" ...
I0403 18:58:20.015063 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:22.506950 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:24.507547 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:27.008632 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:29.507592 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:31.508964 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:34.011777 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:36.012088 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:38.514464 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:40.531706 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:43.008097 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:45.021501 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:47.506771 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:49.508360 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:52.008099 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:54.008713 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:56.507158 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:58:59.007727 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:01.507310 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:04.010179 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:06.011562 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:08.013074 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:10.508062 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:13.007583 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:15.009324 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:17.507744 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:19.507908 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:22.007625 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:24.008002 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:26.008124 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:28.011741 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:30.042941 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:32.507268 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:35.009289 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:37.507438 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:40.010229 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:42.506874 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:44.507332 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:47.007904 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:49.008259 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:51.018533 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:53.507590 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:56.008289 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 18:59:58.014707 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:00.024384 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:02.510751 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:05.012004 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:07.507200 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:10.016082 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:12.016488 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:14.506470 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:16.507312 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:18.507495 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:21.008779 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:23.507331 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:25.507354 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:27.507700 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:30.023042 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:32.507413 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:34.507871 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:37.009694 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:39.508050 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:42.010573 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:44.507977 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:47.008131 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:49.008166 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:51.009331 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:53.507048 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:55.507606 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:00:58.007902 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:00.017262 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:02.506680 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:04.507839 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:07.008353 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:09.506922 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:11.515959 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:14.017100 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:16.506675 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:18.507548 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:21.007932 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:23.008486 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:25.010428 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:27.507672 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:29.509809 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:31.510585 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:34.008635 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:36.010209 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:38.507438 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:40.510062 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:43.009733 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:45.011783 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:47.508770 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:50.009368 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:52.009507 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:54.014846 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:56.507276 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:58.512189 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:01.008389 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:03.009414 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:05.507572 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:08.009585 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:10.012914 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:12.587566 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:15.016096 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:17.507032 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:18.002529 492847 pod_ready.go:82] duration metric: took 4m0.000807184s for pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace to be "Ready" ...
E0403 19:02:18.002563 492847 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0403 19:02:18.002574 492847 pod_ready.go:39] duration metric: took 5m28.028966474s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
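(Annotation: the 4m0s cap on the metrics-server wait is what expires here, and the kubelet entries gathered further down in this log, the fake.domain ErrImagePull / ImagePullBackOff warnings, show why the pod can never become Ready: its image reference is deliberately unresolvable. Below is a minimal sketch of the kind of readiness poll these pod_ready lines report, assuming client-go and the harness kubeconfig path; the function name and poll interval are illustrative, not minikube's.)

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod's Ready condition until it is True or the
    // context expires, roughly what the pod_ready lines above are reporting.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
    	ticker := time.NewTicker(2 * time.Second)
    	defer ticker.Stop()
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // "context deadline exceeded", as logged at 19:02:18
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20591-279421/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	fmt.Println(waitPodReady(ctx, cs, "kube-system", "metrics-server-9975d5f86-xfpl4"))
    }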
I0403 19:02:18.002594 492847 api_server.go:52] waiting for apiserver process to appear ...
I0403 19:02:18.002632 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0403 19:02:18.002712 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0403 19:02:18.046159 492847 cri.go:89] found id: "6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a"
I0403 19:02:18.046184 492847 cri.go:89] found id: "708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf"
I0403 19:02:18.046190 492847 cri.go:89] found id: ""
I0403 19:02:18.046198 492847 logs.go:282] 2 containers: [6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a 708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf]
I0403 19:02:18.046261 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.050381 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.054309 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0403 19:02:18.054394 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0403 19:02:18.095726 492847 cri.go:89] found id: "1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0"
I0403 19:02:18.095750 492847 cri.go:89] found id: "d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2"
I0403 19:02:18.095755 492847 cri.go:89] found id: ""
I0403 19:02:18.095763 492847 logs.go:282] 2 containers: [1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0 d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2]
I0403 19:02:18.095822 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.099427 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.103135 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0403 19:02:18.103211 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0403 19:02:18.143656 492847 cri.go:89] found id: "390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9"
I0403 19:02:18.143686 492847 cri.go:89] found id: "2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36"
I0403 19:02:18.143693 492847 cri.go:89] found id: ""
I0403 19:02:18.143703 492847 logs.go:282] 2 containers: [390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9 2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36]
I0403 19:02:18.143790 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.147571 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.151350 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0403 19:02:18.151460 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0403 19:02:18.190593 492847 cri.go:89] found id: "a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33"
I0403 19:02:18.190618 492847 cri.go:89] found id: "5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf"
I0403 19:02:18.190624 492847 cri.go:89] found id: ""
I0403 19:02:18.190631 492847 logs.go:282] 2 containers: [a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33 5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf]
I0403 19:02:18.190693 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.194425 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.198188 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0403 19:02:18.198265 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0403 19:02:18.245589 492847 cri.go:89] found id: "34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c"
I0403 19:02:18.245704 492847 cri.go:89] found id: "1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8"
I0403 19:02:18.245726 492847 cri.go:89] found id: ""
I0403 19:02:18.245741 492847 logs.go:282] 2 containers: [34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c 1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8]
I0403 19:02:18.245817 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.249764 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.253223 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0403 19:02:18.253342 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0403 19:02:18.294187 492847 cri.go:89] found id: "d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e"
I0403 19:02:18.294213 492847 cri.go:89] found id: "dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426"
I0403 19:02:18.294219 492847 cri.go:89] found id: ""
I0403 19:02:18.294227 492847 logs.go:282] 2 containers: [d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426]
I0403 19:02:18.294287 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.297832 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.301208 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0403 19:02:18.301277 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0403 19:02:18.339338 492847 cri.go:89] found id: "399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7"
I0403 19:02:18.339357 492847 cri.go:89] found id: "a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4"
I0403 19:02:18.339362 492847 cri.go:89] found id: ""
I0403 19:02:18.339369 492847 logs.go:282] 2 containers: [399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7 a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4]
I0403 19:02:18.339425 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.343067 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.346263 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0403 19:02:18.346366 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0403 19:02:18.386867 492847 cri.go:89] found id: "982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c"
I0403 19:02:18.386893 492847 cri.go:89] found id: ""
I0403 19:02:18.386902 492847 logs.go:282] 1 containers: [982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c]
I0403 19:02:18.386960 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.390505 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0403 19:02:18.390634 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0403 19:02:18.426483 492847 cri.go:89] found id: "110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c"
I0403 19:02:18.426549 492847 cri.go:89] found id: "0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e"
I0403 19:02:18.426559 492847 cri.go:89] found id: ""
I0403 19:02:18.426567 492847 logs.go:282] 2 containers: [110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c 0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e]
I0403 19:02:18.426636 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.430078 492847 ssh_runner.go:195] Run: which crictl
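(Annotation: each component above is enumerated with sudo crictl ps -a --quiet --name=<component>, where --quiet prints one container ID per line; two IDs per component are expected after a restart, since the stopped pre-restart container is still listed alongside the running one. A sketch of the same enumeration, assuming crictl is on PATH and can reach the container runtime:)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mirrors the `sudo crictl ps -a --quiet --name=<name>`
    // calls above and returns the printed container IDs.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a",
    		"--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "kubernetes-dashboard", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }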
I0403 19:02:18.433464 492847 logs.go:123] Gathering logs for storage-provisioner [110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c] ...
I0403 19:02:18.433488 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c"
I0403 19:02:18.474945 492847 logs.go:123] Gathering logs for etcd [d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2] ...
I0403 19:02:18.474973 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2"
I0403 19:02:18.525243 492847 logs.go:123] Gathering logs for coredns [390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9] ...
I0403 19:02:18.525274 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9"
I0403 19:02:18.568513 492847 logs.go:123] Gathering logs for kube-proxy [34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c] ...
I0403 19:02:18.568542 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c"
I0403 19:02:18.619394 492847 logs.go:123] Gathering logs for kube-controller-manager [d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e] ...
I0403 19:02:18.619424 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e"
I0403 19:02:18.703917 492847 logs.go:123] Gathering logs for kubernetes-dashboard [982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c] ...
I0403 19:02:18.703994 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c"
I0403 19:02:18.782639 492847 logs.go:123] Gathering logs for containerd ...
I0403 19:02:18.782718 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0403 19:02:18.848208 492847 logs.go:123] Gathering logs for kube-apiserver [6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a] ...
I0403 19:02:18.848291 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a"
I0403 19:02:18.931802 492847 logs.go:123] Gathering logs for kube-scheduler [5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf] ...
I0403 19:02:18.931881 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf"
I0403 19:02:18.994785 492847 logs.go:123] Gathering logs for kube-proxy [1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8] ...
I0403 19:02:18.994868 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8"
I0403 19:02:19.037717 492847 logs.go:123] Gathering logs for kindnet [399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7] ...
I0403 19:02:19.037747 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7"
I0403 19:02:19.092893 492847 logs.go:123] Gathering logs for container status ...
I0403 19:02:19.092931 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
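(Annotation: the kubelet pass that follows tails the unit's journal with journalctl -u kubelet -n 400 and flags entries as "Found kubelet problem". A rough reproduction of that scan is sketched below; the substring filter is an assumed stand-in for the real matcher in logs.go, which this log does not show.)

    package main

    import (
    	"bufio"
    	"bytes"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Tail the last 400 kubelet journal lines, as the harness does.
    	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
    	if err != nil {
    		fmt.Println("journalctl failed:", err)
    		return
    	}
    	sc := bufio.NewScanner(bytes.NewReader(out))
    	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
    	for sc.Scan() {
    		line := sc.Text()
    		// Illustrative filter: the W-level entries below all come from
    		// pod_workers.go "Error syncing pod ..." lines.
    		if strings.Contains(line, "pod_workers.go") && strings.Contains(line, "Error syncing pod") {
    			fmt.Println("Found kubelet problem:", line)
    		}
    	}
    }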
I0403 19:02:19.140985 492847 logs.go:123] Gathering logs for kubelet ...
I0403 19:02:19.141017 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0403 19:02:19.189855 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866658 655 reflector.go:138] object-"kube-system"/"kube-proxy-token-2w58r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2w58r" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.190079 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866762 655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.190292 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866836 655 reflector.go:138] object-"kube-system"/"coredns-token-dlgd7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-dlgd7" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.190516 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866896 655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.190734 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866962 655 reflector.go:138] object-"kube-system"/"kindnet-token-5tz9w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5tz9w" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.190963 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.867057 655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-72p5s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-72p5s" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.191185 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.882691 655 reflector.go:138] object-"default"/"default-token-pflvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pflvq" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.198869 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:52 old-k8s-version-807851 kubelet[655]: E0403 18:56:52.176260 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:19.199062 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:52 old-k8s-version-807851 kubelet[655]: E0403 18:56:52.321913 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.203526 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:07 old-k8s-version-807851 kubelet[655]: E0403 18:57:07.491831 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:19.205222 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:18 old-k8s-version-807851 kubelet[655]: E0403 18:57:18.480376 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.206038 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:21 old-k8s-version-807851 kubelet[655]: E0403 18:57:21.487400 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.206500 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:22 old-k8s-version-807851 kubelet[655]: E0403 18:57:22.492577 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.206829 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:23 old-k8s-version-807851 kubelet[655]: E0403 18:57:23.494384 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.207269 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:25 old-k8s-version-807851 kubelet[655]: E0403 18:57:25.512501 655 pod_workers.go:191] Error syncing pod 11226bcd-ff55-42fd-aee7-efbfee400f0d ("storage-provisioner_kube-system(11226bcd-ff55-42fd-aee7-efbfee400f0d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(11226bcd-ff55-42fd-aee7-efbfee400f0d)"
W0403 19:02:19.211083 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:32 old-k8s-version-807851 kubelet[655]: E0403 18:57:32.491653 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:19.211693 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:37 old-k8s-version-807851 kubelet[655]: E0403 18:57:37.554635 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.212154 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:42 old-k8s-version-807851 kubelet[655]: E0403 18:57:42.438627 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.212348 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:43 old-k8s-version-807851 kubelet[655]: E0403 18:57:43.477336 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.212716 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:53 old-k8s-version-807851 kubelet[655]: E0403 18:57:53.476964 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.212904 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:57 old-k8s-version-807851 kubelet[655]: E0403 18:57:57.477386 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.213490 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:05 old-k8s-version-807851 kubelet[655]: E0403 18:58:05.636989 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.213829 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:12 old-k8s-version-807851 kubelet[655]: E0403 18:58:12.438564 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.214014 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:12 old-k8s-version-807851 kubelet[655]: E0403 18:58:12.477281 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.216809 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:23 old-k8s-version-807851 kubelet[655]: E0403 18:58:23.488466 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:19.217148 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:24 old-k8s-version-807851 kubelet[655]: E0403 18:58:24.477118 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.217863 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:35 old-k8s-version-807851 kubelet[655]: E0403 18:58:35.476943 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.218063 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:37 old-k8s-version-807851 kubelet[655]: E0403 18:58:37.477387 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.218657 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:48 old-k8s-version-807851 kubelet[655]: E0403 18:58:48.750238 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.218842 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:50 old-k8s-version-807851 kubelet[655]: E0403 18:58:50.478191 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.219182 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:52 old-k8s-version-807851 kubelet[655]: E0403 18:58:52.438533 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.219370 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:04 old-k8s-version-807851 kubelet[655]: E0403 18:59:04.477211 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.219697 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:05 old-k8s-version-807851 kubelet[655]: E0403 18:59:05.476890 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.220013 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:19 old-k8s-version-807851 kubelet[655]: E0403 18:59:19.477686 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.220209 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:19 old-k8s-version-807851 kubelet[655]: E0403 18:59:19.478097 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.220399 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:31 old-k8s-version-807851 kubelet[655]: E0403 18:59:31.477291 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.220725 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:33 old-k8s-version-807851 kubelet[655]: E0403 18:59:33.476857 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.223179 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:46 old-k8s-version-807851 kubelet[655]: E0403 18:59:46.487501 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:19.223517 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:48 old-k8s-version-807851 kubelet[655]: E0403 18:59:48.477329 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.223702 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:59 old-k8s-version-807851 kubelet[655]: E0403 18:59:59.477203 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.224028 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:01 old-k8s-version-807851 kubelet[655]: E0403 19:00:01.477105 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.224212 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:14 old-k8s-version-807851 kubelet[655]: E0403 19:00:14.477636 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.224809 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:16 old-k8s-version-807851 kubelet[655]: E0403 19:00:16.983011 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.225135 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:22 old-k8s-version-807851 kubelet[655]: E0403 19:00:22.438177 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.225320 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:29 old-k8s-version-807851 kubelet[655]: E0403 19:00:29.477285 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.225657 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:37 old-k8s-version-807851 kubelet[655]: E0403 19:00:37.476888 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.225846 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:43 old-k8s-version-807851 kubelet[655]: E0403 19:00:43.477242 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.226171 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:52 old-k8s-version-807851 kubelet[655]: E0403 19:00:52.481943 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.226356 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:58 old-k8s-version-807851 kubelet[655]: E0403 19:00:58.477202 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.226687 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:04 old-k8s-version-807851 kubelet[655]: E0403 19:01:04.483258 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.226871 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:13 old-k8s-version-807851 kubelet[655]: E0403 19:01:13.477232 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.227197 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:19 old-k8s-version-807851 kubelet[655]: E0403 19:01:19.476987 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.227381 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:24 old-k8s-version-807851 kubelet[655]: E0403 19:01:24.477484 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.227707 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:33 old-k8s-version-807851 kubelet[655]: E0403 19:01:33.477359 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.227892 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:39 old-k8s-version-807851 kubelet[655]: E0403 19:01:39.477333 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.228216 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:47 old-k8s-version-807851 kubelet[655]: E0403 19:01:47.476853 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.228403 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:53 old-k8s-version-807851 kubelet[655]: E0403 19:01:53.477144 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.228728 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:58 old-k8s-version-807851 kubelet[655]: E0403 19:01:58.477092 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.228909 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:04 old-k8s-version-807851 kubelet[655]: E0403 19:02:04.486011 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.229235 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:13 old-k8s-version-807851 kubelet[655]: E0403 19:02:13.476993 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.229421 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:17 old-k8s-version-807851 kubelet[655]: E0403 19:02:17.477249 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0403 19:02:19.229433 492847 logs.go:123] Gathering logs for dmesg ...
I0403 19:02:19.229447 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0403 19:02:19.246484 492847 logs.go:123] Gathering logs for kube-apiserver [708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf] ...
I0403 19:02:19.246514 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf"
I0403 19:02:19.306866 492847 logs.go:123] Gathering logs for kube-scheduler [a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33] ...
I0403 19:02:19.306906 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33"
I0403 19:02:19.346628 492847 logs.go:123] Gathering logs for kube-controller-manager [dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426] ...
I0403 19:02:19.346657 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426"
I0403 19:02:19.416616 492847 logs.go:123] Gathering logs for kindnet [a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4] ...
I0403 19:02:19.416648 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4"
I0403 19:02:19.456765 492847 logs.go:123] Gathering logs for storage-provisioner [0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e] ...
I0403 19:02:19.456794 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e"
I0403 19:02:19.501537 492847 logs.go:123] Gathering logs for describe nodes ...
I0403 19:02:19.501564 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0403 19:02:19.661763 492847 logs.go:123] Gathering logs for etcd [1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0] ...
I0403 19:02:19.661791 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0"
I0403 19:02:19.725922 492847 logs.go:123] Gathering logs for coredns [2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36] ...
I0403 19:02:19.725950 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36"
I0403 19:02:19.789868 492847 out.go:358] Setting ErrFile to fd 2...
I0403 19:02:19.789893 492847 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0403 19:02:19.789944 492847 out.go:270] X Problems detected in kubelet:
W0403 19:02:19.789957 492847 out.go:270] Apr 03 19:01:53 old-k8s-version-807851 kubelet[655]: E0403 19:01:53.477144 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.789972 492847 out.go:270] Apr 03 19:01:58 old-k8s-version-807851 kubelet[655]: E0403 19:01:58.477092 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.789994 492847 out.go:270] Apr 03 19:02:04 old-k8s-version-807851 kubelet[655]: E0403 19:02:04.486011 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.790005 492847 out.go:270] Apr 03 19:02:13 old-k8s-version-807851 kubelet[655]: E0403 19:02:13.476993 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.790019 492847 out.go:270] Apr 03 19:02:17 old-k8s-version-807851 kubelet[655]: E0403 19:02:17.477249 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0403 19:02:19.790030 492847 out.go:358] Setting ErrFile to fd 2...
I0403 19:02:19.790037 492847 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 19:02:29.791382 492847 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0403 19:02:29.803974 492847 api_server.go:72] duration metric: took 5m57.251559586s to wait for apiserver process to appear ...
I0403 19:02:29.803998 492847 api_server.go:88] waiting for apiserver healthz status ...
I0403 19:02:29.804036 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0403 19:02:29.804099 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0403 19:02:29.840931 492847 cri.go:89] found id: "6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a"
I0403 19:02:29.840957 492847 cri.go:89] found id: "708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf"
I0403 19:02:29.840962 492847 cri.go:89] found id: ""
I0403 19:02:29.840970 492847 logs.go:282] 2 containers: [6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a 708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf]
I0403 19:02:29.841027 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.844658 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.848226 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0403 19:02:29.848320 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0403 19:02:29.885827 492847 cri.go:89] found id: "1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0"
I0403 19:02:29.885852 492847 cri.go:89] found id: "d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2"
I0403 19:02:29.885858 492847 cri.go:89] found id: ""
I0403 19:02:29.885865 492847 logs.go:282] 2 containers: [1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0 d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2]
I0403 19:02:29.885924 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.889523 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.893170 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0403 19:02:29.893245 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0403 19:02:29.933618 492847 cri.go:89] found id: "390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9"
I0403 19:02:29.933688 492847 cri.go:89] found id: "2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36"
I0403 19:02:29.933695 492847 cri.go:89] found id: ""
I0403 19:02:29.933703 492847 logs.go:282] 2 containers: [390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9 2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36]
I0403 19:02:29.933765 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.937392 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.940763 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0403 19:02:29.940833 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0403 19:02:29.981902 492847 cri.go:89] found id: "a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33"
I0403 19:02:29.981926 492847 cri.go:89] found id: "5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf"
I0403 19:02:29.981932 492847 cri.go:89] found id: ""
I0403 19:02:29.981942 492847 logs.go:282] 2 containers: [a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33 5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf]
I0403 19:02:29.982001 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.986102 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.989577 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0403 19:02:29.989692 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0403 19:02:30.036713 492847 cri.go:89] found id: "34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c"
I0403 19:02:30.036738 492847 cri.go:89] found id: "1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8"
I0403 19:02:30.036745 492847 cri.go:89] found id: ""
I0403 19:02:30.036753 492847 logs.go:282] 2 containers: [34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c 1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8]
I0403 19:02:30.036822 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.041428 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.046110 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0403 19:02:30.046205 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0403 19:02:30.090457 492847 cri.go:89] found id: "d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e"
I0403 19:02:30.090492 492847 cri.go:89] found id: "dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426"
I0403 19:02:30.090498 492847 cri.go:89] found id: ""
I0403 19:02:30.090505 492847 logs.go:282] 2 containers: [d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426]
I0403 19:02:30.090569 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.094766 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.098786 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0403 19:02:30.098874 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0403 19:02:30.138440 492847 cri.go:89] found id: "399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7"
I0403 19:02:30.138519 492847 cri.go:89] found id: "a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4"
I0403 19:02:30.138533 492847 cri.go:89] found id: ""
I0403 19:02:30.138542 492847 logs.go:282] 2 containers: [399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7 a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4]
I0403 19:02:30.138618 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.142678 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.146521 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0403 19:02:30.146654 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0403 19:02:30.186035 492847 cri.go:89] found id: "110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c"
I0403 19:02:30.186071 492847 cri.go:89] found id: "0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e"
I0403 19:02:30.186077 492847 cri.go:89] found id: ""
I0403 19:02:30.186085 492847 logs.go:282] 2 containers: [110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c 0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e]
I0403 19:02:30.186184 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.190175 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.194031 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0403 19:02:30.194127 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0403 19:02:30.234226 492847 cri.go:89] found id: "982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c"
I0403 19:02:30.234251 492847 cri.go:89] found id: ""
I0403 19:02:30.234259 492847 logs.go:282] 1 containers: [982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c]
I0403 19:02:30.234338 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.237787 492847 logs.go:123] Gathering logs for coredns [390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9] ...
I0403 19:02:30.237817 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9"
I0403 19:02:30.279572 492847 logs.go:123] Gathering logs for kube-scheduler [a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33] ...
I0403 19:02:30.279607 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33"
I0403 19:02:30.321779 492847 logs.go:123] Gathering logs for kube-proxy [34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c] ...
I0403 19:02:30.321806 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c"
I0403 19:02:30.358470 492847 logs.go:123] Gathering logs for storage-provisioner [110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c] ...
I0403 19:02:30.358495 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c"
I0403 19:02:30.398042 492847 logs.go:123] Gathering logs for dmesg ...
I0403 19:02:30.398072 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0403 19:02:30.414911 492847 logs.go:123] Gathering logs for etcd [1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0] ...
I0403 19:02:30.414936 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0"
I0403 19:02:30.474445 492847 logs.go:123] Gathering logs for coredns [2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36] ...
I0403 19:02:30.474477 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36"
I0403 19:02:30.565974 492847 logs.go:123] Gathering logs for kindnet [a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4] ...
I0403 19:02:30.566000 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4"
I0403 19:02:30.632109 492847 logs.go:123] Gathering logs for kubelet ...
I0403 19:02:30.632139 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0403 19:02:30.696496 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866658 655 reflector.go:138] object-"kube-system"/"kube-proxy-token-2w58r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2w58r" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.696758 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866762 655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.697009 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866836 655 reflector.go:138] object-"kube-system"/"coredns-token-dlgd7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-dlgd7" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.697235 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866896 655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.697468 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866962 655 reflector.go:138] object-"kube-system"/"kindnet-token-5tz9w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5tz9w" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.698075 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.867057 655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-72p5s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-72p5s" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.698327 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.882691 655 reflector.go:138] object-"default"/"default-token-pflvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pflvq" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.708892 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:52 old-k8s-version-807851 kubelet[655]: E0403 18:56:52.176260 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:30.713150 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:52 old-k8s-version-807851 kubelet[655]: E0403 18:56:52.321913 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.716941 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:07 old-k8s-version-807851 kubelet[655]: E0403 18:57:07.491831 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:30.720305 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:18 old-k8s-version-807851 kubelet[655]: E0403 18:57:18.480376 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.721230 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:21 old-k8s-version-807851 kubelet[655]: E0403 18:57:21.487400 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.721779 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:22 old-k8s-version-807851 kubelet[655]: E0403 18:57:22.492577 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.722136 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:23 old-k8s-version-807851 kubelet[655]: E0403 18:57:23.494384 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.722599 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:25 old-k8s-version-807851 kubelet[655]: E0403 18:57:25.512501 655 pod_workers.go:191] Error syncing pod 11226bcd-ff55-42fd-aee7-efbfee400f0d ("storage-provisioner_kube-system(11226bcd-ff55-42fd-aee7-efbfee400f0d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(11226bcd-ff55-42fd-aee7-efbfee400f0d)"
W0403 19:02:30.725754 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:32 old-k8s-version-807851 kubelet[655]: E0403 18:57:32.491653 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:30.726387 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:37 old-k8s-version-807851 kubelet[655]: E0403 18:57:37.554635 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.726872 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:42 old-k8s-version-807851 kubelet[655]: E0403 18:57:42.438627 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.727082 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:43 old-k8s-version-807851 kubelet[655]: E0403 18:57:43.477336 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.727468 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:53 old-k8s-version-807851 kubelet[655]: E0403 18:57:53.476964 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.727680 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:57 old-k8s-version-807851 kubelet[655]: E0403 18:57:57.477386 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.728372 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:05 old-k8s-version-807851 kubelet[655]: E0403 18:58:05.636989 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.728733 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:12 old-k8s-version-807851 kubelet[655]: E0403 18:58:12.438564 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.728944 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:12 old-k8s-version-807851 kubelet[655]: E0403 18:58:12.477281 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.733381 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:23 old-k8s-version-807851 kubelet[655]: E0403 18:58:23.488466 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:30.733925 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:24 old-k8s-version-807851 kubelet[655]: E0403 18:58:24.477118 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.734298 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:35 old-k8s-version-807851 kubelet[655]: E0403 18:58:35.476943 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.734484 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:37 old-k8s-version-807851 kubelet[655]: E0403 18:58:37.477387 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.735068 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:48 old-k8s-version-807851 kubelet[655]: E0403 18:58:48.750238 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.735248 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:50 old-k8s-version-807851 kubelet[655]: E0403 18:58:50.478191 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.735570 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:52 old-k8s-version-807851 kubelet[655]: E0403 18:58:52.438533 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.735749 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:04 old-k8s-version-807851 kubelet[655]: E0403 18:59:04.477211 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.736072 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:05 old-k8s-version-807851 kubelet[655]: E0403 18:59:05.476890 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.736389 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:19 old-k8s-version-807851 kubelet[655]: E0403 18:59:19.477686 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.736585 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:19 old-k8s-version-807851 kubelet[655]: E0403 18:59:19.478097 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.736764 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:31 old-k8s-version-807851 kubelet[655]: E0403 18:59:31.477291 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.737319 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:33 old-k8s-version-807851 kubelet[655]: E0403 18:59:33.476857 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.739861 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:46 old-k8s-version-807851 kubelet[655]: E0403 18:59:46.487501 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:30.740221 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:48 old-k8s-version-807851 kubelet[655]: E0403 18:59:48.477329 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.740472 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:59 old-k8s-version-807851 kubelet[655]: E0403 18:59:59.477203 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.740850 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:01 old-k8s-version-807851 kubelet[655]: E0403 19:00:01.477105 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.741065 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:14 old-k8s-version-807851 kubelet[655]: E0403 19:00:14.477636 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.741747 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:16 old-k8s-version-807851 kubelet[655]: E0403 19:00:16.983011 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.742103 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:22 old-k8s-version-807851 kubelet[655]: E0403 19:00:22.438177 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.742317 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:29 old-k8s-version-807851 kubelet[655]: E0403 19:00:29.477285 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.742674 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:37 old-k8s-version-807851 kubelet[655]: E0403 19:00:37.476888 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.742888 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:43 old-k8s-version-807851 kubelet[655]: E0403 19:00:43.477242 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.743241 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:52 old-k8s-version-807851 kubelet[655]: E0403 19:00:52.481943 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.743450 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:58 old-k8s-version-807851 kubelet[655]: E0403 19:00:58.477202 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.743803 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:04 old-k8s-version-807851 kubelet[655]: E0403 19:01:04.483258 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.744013 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:13 old-k8s-version-807851 kubelet[655]: E0403 19:01:13.477232 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.744371 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:19 old-k8s-version-807851 kubelet[655]: E0403 19:01:19.476987 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.744582 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:24 old-k8s-version-807851 kubelet[655]: E0403 19:01:24.477484 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.744934 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:33 old-k8s-version-807851 kubelet[655]: E0403 19:01:33.477359 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.745144 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:39 old-k8s-version-807851 kubelet[655]: E0403 19:01:39.477333 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.745496 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:47 old-k8s-version-807851 kubelet[655]: E0403 19:01:47.476853 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.745723 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:53 old-k8s-version-807851 kubelet[655]: E0403 19:01:53.477144 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.746075 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:58 old-k8s-version-807851 kubelet[655]: E0403 19:01:58.477092 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.746284 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:04 old-k8s-version-807851 kubelet[655]: E0403 19:02:04.486011 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.746637 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:13 old-k8s-version-807851 kubelet[655]: E0403 19:02:13.476993 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.746848 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:17 old-k8s-version-807851 kubelet[655]: E0403 19:02:17.477249 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.747207 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:27 old-k8s-version-807851 kubelet[655]: E0403 19:02:27.478282 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.750104 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:30 old-k8s-version-807851 kubelet[655]: E0403 19:02:30.527317 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
I0403 19:02:30.750156 492847 logs.go:123] Gathering logs for describe nodes ...
I0403 19:02:30.750186 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0403 19:02:30.967759 492847 logs.go:123] Gathering logs for kube-apiserver [708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf] ...
I0403 19:02:30.967831 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf"
I0403 19:02:31.073795 492847 logs.go:123] Gathering logs for etcd [d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2] ...
I0403 19:02:31.073859 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2"
I0403 19:02:31.139807 492847 logs.go:123] Gathering logs for kube-controller-manager [d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e] ...
I0403 19:02:31.139835 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e"
I0403 19:02:31.200749 492847 logs.go:123] Gathering logs for kube-controller-manager [dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426] ...
I0403 19:02:31.200824 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426"
I0403 19:02:31.310778 492847 logs.go:123] Gathering logs for storage-provisioner [0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e] ...
I0403 19:02:31.310816 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e"
I0403 19:02:31.374588 492847 logs.go:123] Gathering logs for kube-apiserver [6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a] ...
I0403 19:02:31.374619 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a"
I0403 19:02:31.466572 492847 logs.go:123] Gathering logs for kube-scheduler [5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf] ...
I0403 19:02:31.466663 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf"
I0403 19:02:31.522302 492847 logs.go:123] Gathering logs for kube-proxy [1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8] ...
I0403 19:02:31.522381 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8"
I0403 19:02:31.585157 492847 logs.go:123] Gathering logs for kindnet [399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7] ...
I0403 19:02:31.585232 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7"
I0403 19:02:31.645567 492847 logs.go:123] Gathering logs for kubernetes-dashboard [982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c] ...
I0403 19:02:31.645651 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c"
I0403 19:02:31.695352 492847 logs.go:123] Gathering logs for containerd ...
I0403 19:02:31.695433 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0403 19:02:31.777283 492847 logs.go:123] Gathering logs for container status ...
I0403 19:02:31.777361 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0403 19:02:31.864284 492847 out.go:358] Setting ErrFile to fd 2...
I0403 19:02:31.864361 492847 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0403 19:02:31.864436 492847 out.go:270] X Problems detected in kubelet:
W0403 19:02:31.864482 492847 out.go:270] Apr 03 19:02:04 old-k8s-version-807851 kubelet[655]: E0403 19:02:04.486011 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:31.864522 492847 out.go:270] Apr 03 19:02:13 old-k8s-version-807851 kubelet[655]: E0403 19:02:13.476993 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:31.864562 492847 out.go:270] Apr 03 19:02:17 old-k8s-version-807851 kubelet[655]: E0403 19:02:17.477249 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:31.864606 492847 out.go:270] Apr 03 19:02:27 old-k8s-version-807851 kubelet[655]: E0403 19:02:27.478282 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:31.864639 492847 out.go:270] Apr 03 19:02:30 old-k8s-version-807851 kubelet[655]: E0403 19:02:30.527317 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
I0403 19:02:31.864690 492847 out.go:358] Setting ErrFile to fd 2...
I0403 19:02:31.864711 492847 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 19:02:41.869145 492847 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0403 19:02:41.884833 492847 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0403 19:02:41.888027 492847 out.go:201]
W0403 19:02:41.890953 492847 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0403 19:02:41.890999 492847 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0403 19:02:41.891017 492847 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0403 19:02:41.891023 492847 out.go:270] *
W0403 19:02:41.891916 492847 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0403 19:02:41.895864 492847 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-807851 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-807851
helpers_test.go:235: (dbg) docker inspect old-k8s-version-807851:
-- stdout --
[
{
"Id": "f291c1d7de730b4c3b49234aebd8f6b88b1405a27630ac6dfd20226f7c745f11",
"Created": "2025-04-03T18:53:14.685340738Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 492979,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-04-03T18:56:25.39358643Z",
"FinishedAt": "2025-04-03T18:56:24.567616604Z"
},
"Image": "sha256:1a97cd9e9bbab266425b883d3ed87fee4969302ed9a49ce4df4bf460f6bbf404",
"ResolvConfPath": "/var/lib/docker/containers/f291c1d7de730b4c3b49234aebd8f6b88b1405a27630ac6dfd20226f7c745f11/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/f291c1d7de730b4c3b49234aebd8f6b88b1405a27630ac6dfd20226f7c745f11/hostname",
"HostsPath": "/var/lib/docker/containers/f291c1d7de730b4c3b49234aebd8f6b88b1405a27630ac6dfd20226f7c745f11/hosts",
"LogPath": "/var/lib/docker/containers/f291c1d7de730b4c3b49234aebd8f6b88b1405a27630ac6dfd20226f7c745f11/f291c1d7de730b4c3b49234aebd8f6b88b1405a27630ac6dfd20226f7c745f11-json.log",
"Name": "/old-k8s-version-807851",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-807851:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-807851",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "f291c1d7de730b4c3b49234aebd8f6b88b1405a27630ac6dfd20226f7c745f11",
"LowerDir": "/var/lib/docker/overlay2/407125c564d9877dd4ee30d1cb78894488001bdce073c9ff39b04d87ca4abb7f-init/diff:/var/lib/docker/overlay2/f993e0c56445a4465d112340b1cb3dc38281cbf0d0fa8601cbf4e9c619674ac1/diff",
"MergedDir": "/var/lib/docker/overlay2/407125c564d9877dd4ee30d1cb78894488001bdce073c9ff39b04d87ca4abb7f/merged",
"UpperDir": "/var/lib/docker/overlay2/407125c564d9877dd4ee30d1cb78894488001bdce073c9ff39b04d87ca4abb7f/diff",
"WorkDir": "/var/lib/docker/overlay2/407125c564d9877dd4ee30d1cb78894488001bdce073c9ff39b04d87ca4abb7f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-807851",
"Source": "/var/lib/docker/volumes/old-k8s-version-807851/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-807851",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-807851",
"name.minikube.sigs.k8s.io": "old-k8s-version-807851",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "45f8f89f52b0e57560ff20a1e112d4c839c5a69cd96a1fbe753d007e66927f92",
"SandboxKey": "/var/run/docker/netns/45f8f89f52b0",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33429"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33430"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33433"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33431"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33432"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-807851": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "c2:9c:1e:2f:7a:da",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "42beabe9b8f7a0f2e1963e559e7d089848ef4449a3e23f8e91984e09acaa9792",
"EndpointID": "b99cb1634b81fca6ec82dbbd4725dbbefcf0fdb3808df8a690925516b309ec70",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-807851",
"f291c1d7de73"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-807851 -n old-k8s-version-807851
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-807851 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-807851 logs -n 25: (2.312872623s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| delete | -p force-systemd-flag-103903 | force-systemd-flag-103903 | jenkins | v1.35.0 | 03 Apr 25 18:52 UTC | 03 Apr 25 18:52 UTC |
| start | -p cert-options-499613 | cert-options-499613 | jenkins | v1.35.0 | 03 Apr 25 18:52 UTC | 03 Apr 25 18:53 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-499613 ssh | cert-options-499613 | jenkins | v1.35.0 | 03 Apr 25 18:53 UTC | 03 Apr 25 18:53 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-499613 -- sudo | cert-options-499613 | jenkins | v1.35.0 | 03 Apr 25 18:53 UTC | 03 Apr 25 18:53 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-499613 | cert-options-499613 | jenkins | v1.35.0 | 03 Apr 25 18:53 UTC | 03 Apr 25 18:53 UTC |
| start | -p old-k8s-version-807851 | old-k8s-version-807851 | jenkins | v1.35.0 | 03 Apr 25 18:53 UTC | 03 Apr 25 18:56 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-517821 | cert-expiration-517821 | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:55 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-517821 | cert-expiration-517821 | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:55 UTC |
| start | -p no-preload-734293 | no-preload-734293 | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:56 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p old-k8s-version-807851 | old-k8s-version-807851 | jenkins | v1.35.0 | 03 Apr 25 18:56 UTC | 03 Apr 25 18:56 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-807851 | old-k8s-version-807851 | jenkins | v1.35.0 | 03 Apr 25 18:56 UTC | 03 Apr 25 18:56 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-807851 | old-k8s-version-807851 | jenkins | v1.35.0 | 03 Apr 25 18:56 UTC | 03 Apr 25 18:56 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-807851 | old-k8s-version-807851 | jenkins | v1.35.0 | 03 Apr 25 18:56 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-734293 | no-preload-734293 | jenkins | v1.35.0 | 03 Apr 25 18:56 UTC | 03 Apr 25 18:56 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-734293 | no-preload-734293 | jenkins | v1.35.0 | 03 Apr 25 18:56 UTC | 03 Apr 25 18:56 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-734293 | no-preload-734293 | jenkins | v1.35.0 | 03 Apr 25 18:56 UTC | 03 Apr 25 18:56 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-734293 | no-preload-734293 | jenkins | v1.35.0 | 03 Apr 25 18:56 UTC | 03 Apr 25 19:01 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| image | no-preload-734293 image list | no-preload-734293 | jenkins | v1.35.0 | 03 Apr 25 19:01 UTC | 03 Apr 25 19:01 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-734293 | no-preload-734293 | jenkins | v1.35.0 | 03 Apr 25 19:01 UTC | 03 Apr 25 19:01 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-734293 | no-preload-734293 | jenkins | v1.35.0 | 03 Apr 25 19:01 UTC | 03 Apr 25 19:01 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-734293 | no-preload-734293 | jenkins | v1.35.0 | 03 Apr 25 19:01 UTC | 03 Apr 25 19:01 UTC |
| delete | -p no-preload-734293 | no-preload-734293 | jenkins | v1.35.0 | 03 Apr 25 19:01 UTC | 03 Apr 25 19:01 UTC |
| start | -p embed-certs-991162 | embed-certs-991162 | jenkins | v1.35.0 | 03 Apr 25 19:01 UTC | 03 Apr 25 19:02 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p embed-certs-991162 | embed-certs-991162 | jenkins | v1.35.0 | 03 Apr 25 19:02 UTC | 03 Apr 25 19:02 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p embed-certs-991162 | embed-certs-991162 | jenkins | v1.35.0 | 03 Apr 25 19:02 UTC | |
| | --alsologtostderr -v=3 | | | | | |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/03 19:01:30
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.24.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0403 19:01:30.309900 502372 out.go:345] Setting OutFile to fd 1 ...
I0403 19:01:30.310068 502372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 19:01:30.310098 502372 out.go:358] Setting ErrFile to fd 2...
I0403 19:01:30.310120 502372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 19:01:30.310381 502372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-279421/.minikube/bin
I0403 19:01:30.310823 502372 out.go:352] Setting JSON to false
I0403 19:01:30.311833 502372 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9842,"bootTime":1743697049,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0403 19:01:30.311930 502372 start.go:139] virtualization:
I0403 19:01:30.318252 502372 out.go:177] * [embed-certs-991162] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0403 19:01:30.322682 502372 out.go:177] - MINIKUBE_LOCATION=20591
I0403 19:01:30.322743 502372 notify.go:220] Checking for updates...
I0403 19:01:30.329231 502372 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0403 19:01:30.332395 502372 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20591-279421/kubeconfig
I0403 19:01:30.335441 502372 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-279421/.minikube
I0403 19:01:30.338354 502372 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0403 19:01:30.341399 502372 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0403 19:01:30.345837 502372 config.go:182] Loaded profile config "old-k8s-version-807851": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0403 19:01:30.345939 502372 driver.go:394] Setting default libvirt URI to qemu:///system
I0403 19:01:30.369100 502372 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0403 19:01:30.369230 502372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0403 19:01:30.428086 502372 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-03 19:01:30.418908652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0403 19:01:30.428192 502372 docker.go:318] overlay module found
I0403 19:01:30.431282 502372 out.go:177] * Using the docker driver based on user configuration
I0403 19:01:30.434193 502372 start.go:297] selected driver: docker
I0403 19:01:30.434215 502372 start.go:901] validating driver "docker" against <nil>
I0403 19:01:30.434230 502372 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0403 19:01:30.434958 502372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0403 19:01:30.502593 502372 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-03 19:01:30.488232774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0403 19:01:30.502746 502372 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0403 19:01:30.502973 502372 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0403 19:01:30.505968 502372 out.go:177] * Using Docker driver with root privileges
I0403 19:01:30.508909 502372 cni.go:84] Creating CNI manager for ""
I0403 19:01:30.508988 502372 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0403 19:01:30.509001 502372 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0403 19:01:30.509086 502372 start.go:340] cluster config:
{Name:embed-certs-991162 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-991162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0403 19:01:30.512180 502372 out.go:177] * Starting "embed-certs-991162" primary control-plane node in "embed-certs-991162" cluster
I0403 19:01:30.515104 502372 cache.go:121] Beginning downloading kic base image for docker with containerd
I0403 19:01:30.518116 502372 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
I0403 19:01:30.520973 502372 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0403 19:01:30.521039 502372 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-279421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4
I0403 19:01:30.521051 502372 cache.go:56] Caching tarball of preloaded images
I0403 19:01:30.521061 502372 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
I0403 19:01:30.521159 502372 preload.go:172] Found /home/jenkins/minikube-integration/20591-279421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0403 19:01:30.521170 502372 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0403 19:01:30.521275 502372 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/config.json ...
I0403 19:01:30.521297 502372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/config.json: {Name:mk1481edb7e6ec83ea7c3943ce05bf387cfbcdee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 19:01:30.541306 502372 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
I0403 19:01:30.541336 502372 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
I0403 19:01:30.541351 502372 cache.go:230] Successfully downloaded all kic artifacts
I0403 19:01:30.541380 502372 start.go:360] acquireMachinesLock for embed-certs-991162: {Name:mk161b513b2dbfb6483c96eede05d306007a571d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0403 19:01:30.541491 502372 start.go:364] duration metric: took 90.339µs to acquireMachinesLock for "embed-certs-991162"
I0403 19:01:30.541521 502372 start.go:93] Provisioning new machine with config: &{Name:embed-certs-991162 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-991162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0403 19:01:30.541601 502372 start.go:125] createHost starting for "" (driver="docker")
I0403 19:01:31.510585 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:34.008635 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:30.545318 502372 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0403 19:01:30.545567 502372 start.go:159] libmachine.API.Create for "embed-certs-991162" (driver="docker")
I0403 19:01:30.545619 502372 client.go:168] LocalClient.Create starting
I0403 19:01:30.545937 502372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem
I0403 19:01:30.546010 502372 main.go:141] libmachine: Decoding PEM data...
I0403 19:01:30.546048 502372 main.go:141] libmachine: Parsing certificate...
I0403 19:01:30.546112 502372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-279421/.minikube/certs/cert.pem
I0403 19:01:30.546137 502372 main.go:141] libmachine: Decoding PEM data...
I0403 19:01:30.546148 502372 main.go:141] libmachine: Parsing certificate...
I0403 19:01:30.546538 502372 cli_runner.go:164] Run: docker network inspect embed-certs-991162 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0403 19:01:30.568049 502372 cli_runner.go:211] docker network inspect embed-certs-991162 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0403 19:01:30.568142 502372 network_create.go:284] running [docker network inspect embed-certs-991162] to gather additional debugging logs...
I0403 19:01:30.568202 502372 cli_runner.go:164] Run: docker network inspect embed-certs-991162
W0403 19:01:30.584896 502372 cli_runner.go:211] docker network inspect embed-certs-991162 returned with exit code 1
I0403 19:01:30.584928 502372 network_create.go:287] error running [docker network inspect embed-certs-991162]: docker network inspect embed-certs-991162: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-991162 not found
I0403 19:01:30.584981 502372 network_create.go:289] output of [docker network inspect embed-certs-991162]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-991162 not found
** /stderr **
I0403 19:01:30.585091 502372 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0403 19:01:30.603249 502372 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d1653d9cc329 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:7a:0d:fe:b5:6a} reservation:<nil>}
I0403 19:01:30.603556 502372 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-20b339f54b46 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:2d:87:97:c9:a0} reservation:<nil>}
I0403 19:01:30.604016 502372 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aaea8061051a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:29:d3:a7:40:fd} reservation:<nil>}
I0403 19:01:30.604333 502372 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-42beabe9b8f7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:f6:70:40:54:38:3d} reservation:<nil>}
I0403 19:01:30.604789 502372 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3530}
I0403 19:01:30.604811 502372 network_create.go:124] attempt to create docker network embed-certs-991162 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0403 19:01:30.604879 502372 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-991162 embed-certs-991162
I0403 19:01:30.670517 502372 network_create.go:108] docker network embed-certs-991162 192.168.85.0/24 created
I0403 19:01:30.670553 502372 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-991162" container
I0403 19:01:30.670646 502372 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0403 19:01:30.686766 502372 cli_runner.go:164] Run: docker volume create embed-certs-991162 --label name.minikube.sigs.k8s.io=embed-certs-991162 --label created_by.minikube.sigs.k8s.io=true
I0403 19:01:30.704299 502372 oci.go:103] Successfully created a docker volume embed-certs-991162
I0403 19:01:30.704383 502372 cli_runner.go:164] Run: docker run --rm --name embed-certs-991162-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-991162 --entrypoint /usr/bin/test -v embed-certs-991162:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -d /var/lib
I0403 19:01:31.284178 502372 oci.go:107] Successfully prepared a docker volume embed-certs-991162
I0403 19:01:31.284231 502372 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0403 19:01:31.284250 502372 kic.go:194] Starting extracting preloaded images to volume ...
I0403 19:01:31.284350 502372 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20591-279421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-991162:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir
I0403 19:01:35.934550 502372 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20591-279421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-991162:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir: (4.650160601s)
I0403 19:01:35.934579 502372 kic.go:203] duration metric: took 4.650325632s to extract preloaded images to volume ...
W0403 19:01:35.934715 502372 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0403 19:01:35.934838 502372 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0403 19:01:36.013550 502372 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-991162 --name embed-certs-991162 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-991162 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-991162 --network embed-certs-991162 --ip 192.168.85.2 --volume embed-certs-991162:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727
I0403 19:01:36.330131 502372 cli_runner.go:164] Run: docker container inspect embed-certs-991162 --format={{.State.Running}}
I0403 19:01:36.349361 502372 cli_runner.go:164] Run: docker container inspect embed-certs-991162 --format={{.State.Status}}
I0403 19:01:36.375647 502372 cli_runner.go:164] Run: docker exec embed-certs-991162 stat /var/lib/dpkg/alternatives/iptables
I0403 19:01:36.424244 502372 oci.go:144] the created container "embed-certs-991162" has a running status.
I0403 19:01:36.424276 502372 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20591-279421/.minikube/machines/embed-certs-991162/id_rsa...
I0403 19:01:36.992516 502372 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20591-279421/.minikube/machines/embed-certs-991162/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0403 19:01:37.028509 502372 cli_runner.go:164] Run: docker container inspect embed-certs-991162 --format={{.State.Status}}
I0403 19:01:37.056756 502372 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0403 19:01:37.056780 502372 kic_runner.go:114] Args: [docker exec --privileged embed-certs-991162 chown docker:docker /home/docker/.ssh/authorized_keys]
I0403 19:01:37.114848 502372 cli_runner.go:164] Run: docker container inspect embed-certs-991162 --format={{.State.Status}}
I0403 19:01:37.138227 502372 machine.go:93] provisionDockerMachine start ...
I0403 19:01:37.138403 502372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-991162
I0403 19:01:37.163569 502372 main.go:141] libmachine: Using SSH client type: native
I0403 19:01:37.163898 502372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33439 <nil> <nil>}
I0403 19:01:37.163907 502372 main.go:141] libmachine: About to run SSH command:
hostname
I0403 19:01:37.325053 502372 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-991162
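The "native" SSH client referenced above is an in-process Go client rather than a spawned ssh binary. A self-contained sketch of running a remote command that way, reusing the mapped port (33439) and key path from this log; host-key checking is disabled as it would be for a throwaway test machine, and error handling is simplified:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20591-279421/.minikube/machines/embed-certs-991162/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // disposable test VM only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33439", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}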
I0403 19:01:37.325076 502372 ubuntu.go:169] provisioning hostname "embed-certs-991162"
I0403 19:01:37.325147 502372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-991162
I0403 19:01:37.347930 502372 main.go:141] libmachine: Using SSH client type: native
I0403 19:01:37.348239 502372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33439 <nil> <nil>}
I0403 19:01:37.348271 502372 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-991162 && echo "embed-certs-991162" | sudo tee /etc/hostname
I0403 19:01:37.493140 502372 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-991162
I0403 19:01:37.493221 502372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-991162
I0403 19:01:37.522310 502372 main.go:141] libmachine: Using SSH client type: native
I0403 19:01:37.522628 502372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33439 <nil> <nil>}
I0403 19:01:37.522655 502372 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-991162' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-991162/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-991162' | sudo tee -a /etc/hosts;
fi
fi
I0403 19:01:37.654176 502372 main.go:141] libmachine: SSH cmd err, output: <nil>:
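The script above rewrites the 127.0.1.1 entry so the machine name resolves locally. A simplified Go equivalent; unlike the real script it rewrites the 127.0.1.1 line unconditionally rather than first checking whether the name is already present:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry points 127.0.1.1 at the machine name, appending the
// entry if no such line exists.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	patched := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			patched = true
		}
	}
	if !patched {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "embed-certs-991162"))
}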
I0403 19:01:37.654250 502372 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20591-279421/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-279421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-279421/.minikube}
I0403 19:01:37.654295 502372 ubuntu.go:177] setting up certificates
I0403 19:01:37.654350 502372 provision.go:84] configureAuth start
I0403 19:01:37.654457 502372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-991162
I0403 19:01:37.673264 502372 provision.go:143] copyHostCerts
I0403 19:01:37.673332 502372 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-279421/.minikube/ca.pem, removing ...
I0403 19:01:37.673342 502372 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-279421/.minikube/ca.pem
I0403 19:01:37.673419 502372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-279421/.minikube/ca.pem (1078 bytes)
I0403 19:01:37.673555 502372 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-279421/.minikube/cert.pem, removing ...
I0403 19:01:37.673568 502372 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-279421/.minikube/cert.pem
I0403 19:01:37.673602 502372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-279421/.minikube/cert.pem (1123 bytes)
I0403 19:01:37.673817 502372 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-279421/.minikube/key.pem, removing ...
I0403 19:01:37.673830 502372 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-279421/.minikube/key.pem
I0403 19:01:37.673866 502372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-279421/.minikube/key.pem (1675 bytes)
I0403 19:01:37.673932 502372 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-279421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca-key.pem org=jenkins.embed-certs-991162 san=[127.0.0.1 192.168.85.2 embed-certs-991162 localhost minikube]
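provision.go signs a server certificate against the shared minikube CA with exactly those SANs. A runnable crypto/x509 sketch of the same step; an in-memory CA stands in for ca.pem/ca-key.pem here:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In-memory CA standing in for ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	// Server cert carrying the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-991162"}},
		DNSNames:     []string{"embed-certs-991162", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}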
I0403 19:01:38.236753 502372 provision.go:177] copyRemoteCerts
I0403 19:01:38.236870 502372 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0403 19:01:38.236957 502372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-991162
I0403 19:01:38.265999 502372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/embed-certs-991162/id_rsa Username:docker}
I0403 19:01:38.358548 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0403 19:01:38.384070 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0403 19:01:38.409300 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0403 19:01:38.433599 502372 provision.go:87] duration metric: took 779.221934ms to configureAuth
I0403 19:01:38.433624 502372 ubuntu.go:193] setting minikube options for container-runtime
I0403 19:01:38.433833 502372 config.go:182] Loaded profile config "embed-certs-991162": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 19:01:38.433850 502372 machine.go:96] duration metric: took 1.29560534s to provisionDockerMachine
I0403 19:01:38.433857 502372 client.go:171] duration metric: took 7.888227036s to LocalClient.Create
I0403 19:01:38.433873 502372 start.go:167] duration metric: took 7.888308367s to libmachine.API.Create "embed-certs-991162"
I0403 19:01:38.433887 502372 start.go:293] postStartSetup for "embed-certs-991162" (driver="docker")
I0403 19:01:38.433898 502372 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0403 19:01:38.433949 502372 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0403 19:01:38.434002 502372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-991162
I0403 19:01:38.453537 502372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/embed-certs-991162/id_rsa Username:docker}
I0403 19:01:38.543160 502372 ssh_runner.go:195] Run: cat /etc/os-release
I0403 19:01:38.546588 502372 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0403 19:01:38.546623 502372 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0403 19:01:38.546634 502372 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0403 19:01:38.546642 502372 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0403 19:01:38.546652 502372 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-279421/.minikube/addons for local assets ...
I0403 19:01:38.546709 502372 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-279421/.minikube/files for local assets ...
I0403 19:01:38.546797 502372 filesync.go:149] local asset: /home/jenkins/minikube-integration/20591-279421/.minikube/files/etc/ssl/certs/2848032.pem -> 2848032.pem in /etc/ssl/certs
I0403 19:01:38.546903 502372 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0403 19:01:38.555873 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/files/etc/ssl/certs/2848032.pem --> /etc/ssl/certs/2848032.pem (1708 bytes)
I0403 19:01:38.585426 502372 start.go:296] duration metric: took 151.522142ms for postStartSetup
I0403 19:01:38.585878 502372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-991162
I0403 19:01:38.603868 502372 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/config.json ...
I0403 19:01:38.604160 502372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0403 19:01:38.604212 502372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-991162
I0403 19:01:38.621730 502372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/embed-certs-991162/id_rsa Username:docker}
I0403 19:01:38.706402 502372 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0403 19:01:38.710585 502372 start.go:128] duration metric: took 8.168969387s to createHost
I0403 19:01:38.710611 502372 start.go:83] releasing machines lock for "embed-certs-991162", held for 8.169106069s
I0403 19:01:38.710700 502372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-991162
I0403 19:01:38.728367 502372 ssh_runner.go:195] Run: cat /version.json
I0403 19:01:38.728427 502372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-991162
I0403 19:01:38.728675 502372 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0403 19:01:38.728743 502372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-991162
I0403 19:01:38.751063 502372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/embed-certs-991162/id_rsa Username:docker}
I0403 19:01:38.757794 502372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/embed-certs-991162/id_rsa Username:docker}
I0403 19:01:38.969793 502372 ssh_runner.go:195] Run: systemctl --version
I0403 19:01:38.974235 502372 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0403 19:01:38.978472 502372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0403 19:01:39.003426 502372 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0403 19:01:39.003542 502372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0403 19:01:39.039890 502372 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
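The find/mv above leaves only minikube's own CNI config active by renaming bridge/podman configs to *.mk_disabled. A Go sketch of the same rename pass, with the glob patterns reduced to simple substring checks:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames any bridge/podman CNI configs so they are
// ignored, returning the paths it disabled (as the log line reports).
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	d, err := disableBridgeConfigs("/etc/cni/net.d")
	fmt.Println(d, err)
}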
I0403 19:01:39.039911 502372 start.go:495] detecting cgroup driver to use...
I0403 19:01:39.039944 502372 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0403 19:01:39.040010 502372 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0403 19:01:39.054571 502372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0403 19:01:39.066450 502372 docker.go:217] disabling cri-docker service (if available) ...
I0403 19:01:39.066567 502372 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0403 19:01:39.080162 502372 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0403 19:01:39.095228 502372 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0403 19:01:39.191156 502372 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0403 19:01:39.289175 502372 docker.go:233] disabling docker service ...
I0403 19:01:39.289282 502372 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0403 19:01:39.311141 502372 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0403 19:01:39.323904 502372 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0403 19:01:39.420305 502372 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0403 19:01:39.521029 502372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0403 19:01:39.532497 502372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0403 19:01:39.550063 502372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0403 19:01:39.560425 502372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0403 19:01:39.572326 502372 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0403 19:01:39.572450 502372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0403 19:01:39.582881 502372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0403 19:01:39.593843 502372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0403 19:01:39.604541 502372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0403 19:01:39.615115 502372 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0403 19:01:39.627979 502372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0403 19:01:39.638585 502372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0403 19:01:39.649435 502372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
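The run of sed commands above rewrites /etc/containerd/config.toml in place; the key edit for the detected "cgroupfs" driver is forcing SystemdCgroup = false. A Go sketch of that single edit as a multiline regex replace (the real tooling shells out to sed, as logged):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfs forces SystemdCgroup = false, preserving the original
// indentation the way the sed capture group does.
func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0644)
}

func main() {
	fmt.Println(setCgroupfs("/etc/containerd/config.toml"))
}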
I0403 19:01:39.660246 502372 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0403 19:01:39.668776 502372 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0403 19:01:39.677503 502372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0403 19:01:39.770166 502372 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0403 19:01:39.906807 502372 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0403 19:01:39.906878 502372 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
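"Will wait 60s for socket path" is a poll-until-exists loop. A sketch with the interval as an assumption and the timeout matching the log's 60s budget; minikube's real implementation may differ in detail:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the containerd socket appears or the deadline
// passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is up")
}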
I0403 19:01:39.910539 502372 start.go:563] Will wait 60s for crictl version
I0403 19:01:39.910609 502372 ssh_runner.go:195] Run: which crictl
I0403 19:01:39.914267 502372 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0403 19:01:39.962039 502372 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0403 19:01:39.962160 502372 ssh_runner.go:195] Run: containerd --version
I0403 19:01:39.985108 502372 ssh_runner.go:195] Run: containerd --version
I0403 19:01:40.020415 502372 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.27 ...
I0403 19:01:36.010209 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:38.507438 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:40.023476 502372 cli_runner.go:164] Run: docker network inspect embed-certs-991162 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0403 19:01:40.041937 502372 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0403 19:01:40.046050 502372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0403 19:01:40.059967 502372 kubeadm.go:883] updating cluster {Name:embed-certs-991162 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-991162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0403 19:01:40.060097 502372 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0403 19:01:40.060161 502372 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 19:01:40.100699 502372 containerd.go:627] all images are preloaded for containerd runtime.
I0403 19:01:40.100723 502372 containerd.go:534] Images already preloaded, skipping extraction
I0403 19:01:40.100793 502372 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 19:01:40.142665 502372 containerd.go:627] all images are preloaded for containerd runtime.
I0403 19:01:40.142689 502372 cache_images.go:84] Images are preloaded, skipping loading
I0403 19:01:40.142698 502372 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
I0403 19:01:40.142838 502372 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-991162 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:embed-certs-991162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0403 19:01:40.142914 502372 ssh_runner.go:195] Run: sudo crictl info
I0403 19:01:40.189268 502372 cni.go:84] Creating CNI manager for ""
I0403 19:01:40.189288 502372 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0403 19:01:40.189299 502372 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0403 19:01:40.189322 502372 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-991162 NodeName:embed-certs-991162 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0403 19:01:40.189435 502372 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-991162"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0403 19:01:40.189507 502372 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0403 19:01:40.200471 502372 binaries.go:44] Found k8s binaries, skipping transfer
I0403 19:01:40.200563 502372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0403 19:01:40.214611 502372 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0403 19:01:40.234899 502372 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0403 19:01:40.254953 502372 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0403 19:01:40.275364 502372 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0403 19:01:40.279998 502372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0403 19:01:40.291635 502372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0403 19:01:40.405374 502372 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0403 19:01:40.420603 502372 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162 for IP: 192.168.85.2
I0403 19:01:40.420674 502372 certs.go:194] generating shared ca certs ...
I0403 19:01:40.420706 502372 certs.go:226] acquiring lock for ca certs: {Name:mkbf9d260d0fbb63852ed66b616dcb8dddc3fa66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 19:01:40.420884 502372 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-279421/.minikube/ca.key
I0403 19:01:40.420961 502372 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-279421/.minikube/proxy-client-ca.key
I0403 19:01:40.421001 502372 certs.go:256] generating profile certs ...
I0403 19:01:40.421109 502372 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/client.key
I0403 19:01:40.421142 502372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/client.crt with IP's: []
I0403 19:01:40.704771 502372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/client.crt ...
I0403 19:01:40.704805 502372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/client.crt: {Name:mk1c3d1f9ad45a7140cd6c5aac1d53aed4ff245e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 19:01:40.705009 502372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/client.key ...
I0403 19:01:40.705023 502372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/client.key: {Name:mk845fbc58ad572381940742091e00f9a2c69831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 19:01:40.705788 502372 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.key.6f4df3dd
I0403 19:01:40.705810 502372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.crt.6f4df3dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0403 19:01:41.273030 502372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.crt.6f4df3dd ...
I0403 19:01:41.273063 502372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.crt.6f4df3dd: {Name:mke928d17cb46c8258b871dd80d970ce394f4737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 19:01:41.274545 502372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.key.6f4df3dd ...
I0403 19:01:41.274569 502372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.key.6f4df3dd: {Name:mke8b8747549b14ec90e5bb53cfc0d0fd91f670c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 19:01:41.274663 502372 certs.go:381] copying /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.crt.6f4df3dd -> /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.crt
I0403 19:01:41.274752 502372 certs.go:385] copying /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.key.6f4df3dd -> /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.key
I0403 19:01:41.274814 502372 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/proxy-client.key
I0403 19:01:41.274834 502372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/proxy-client.crt with IP's: []
I0403 19:01:42.751770 502372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/proxy-client.crt ...
I0403 19:01:42.751803 502372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/proxy-client.crt: {Name:mkebdc6b3764eefc4442d34231f8fc0b8a2a4357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 19:01:42.752765 502372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/proxy-client.key ...
I0403 19:01:42.752791 502372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/proxy-client.key: {Name:mke2350cf89f8bb67547a5a5515be2e1f0752fe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 19:01:42.753166 502372 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/284803.pem (1338 bytes)
W0403 19:01:42.753245 502372 certs.go:480] ignoring /home/jenkins/minikube-integration/20591-279421/.minikube/certs/284803_empty.pem, impossibly tiny 0 bytes
I0403 19:01:42.753261 502372 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca-key.pem (1675 bytes)
I0403 19:01:42.753312 502372 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/ca.pem (1078 bytes)
I0403 19:01:42.753374 502372 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/cert.pem (1123 bytes)
I0403 19:01:42.753411 502372 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/certs/key.pem (1675 bytes)
I0403 19:01:42.753529 502372 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-279421/.minikube/files/etc/ssl/certs/2848032.pem (1708 bytes)
I0403 19:01:42.754452 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0403 19:01:42.798023 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0403 19:01:42.826970 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0403 19:01:42.854479 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0403 19:01:42.879181 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0403 19:01:42.905196 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0403 19:01:42.929928 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0403 19:01:42.955057 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/profiles/embed-certs-991162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0403 19:01:42.980120 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0403 19:01:43.009206 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/certs/284803.pem --> /usr/share/ca-certificates/284803.pem (1338 bytes)
I0403 19:01:43.036023 502372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-279421/.minikube/files/etc/ssl/certs/2848032.pem --> /usr/share/ca-certificates/2848032.pem (1708 bytes)
I0403 19:01:43.062110 502372 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0403 19:01:43.081777 502372 ssh_runner.go:195] Run: openssl version
I0403 19:01:43.090630 502372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0403 19:01:43.101076 502372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0403 19:01:43.104648 502372 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 3 18:12 /usr/share/ca-certificates/minikubeCA.pem
I0403 19:01:43.104764 502372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0403 19:01:43.112606 502372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0403 19:01:43.122420 502372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284803.pem && ln -fs /usr/share/ca-certificates/284803.pem /etc/ssl/certs/284803.pem"
I0403 19:01:43.131927 502372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284803.pem
I0403 19:01:43.135692 502372 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 3 18:19 /usr/share/ca-certificates/284803.pem
I0403 19:01:43.135790 502372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284803.pem
I0403 19:01:43.143128 502372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284803.pem /etc/ssl/certs/51391683.0"
I0403 19:01:43.153275 502372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2848032.pem && ln -fs /usr/share/ca-certificates/2848032.pem /etc/ssl/certs/2848032.pem"
I0403 19:01:43.162611 502372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2848032.pem
I0403 19:01:43.166155 502372 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 3 18:19 /usr/share/ca-certificates/2848032.pem
I0403 19:01:43.166225 502372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2848032.pem
I0403 19:01:43.173534 502372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2848032.pem /etc/ssl/certs/3ec20f2e.0"
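The openssl/ln pairs above install each PEM into the OpenSSL trust directory: the certificate's subject hash names a <hash>.0 symlink in /etc/ssl/certs, which is how OpenSSL locates CA certificates. A sketch that shells out to openssl for the hash (writing /etc/ssl/certs requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert computes the subject hash and links <hash>.0 at the cert,
// mirroring the "openssl x509 -hash" + "ln -fs" pair above.
func trustCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}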
I0403 19:01:43.183357 502372 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0403 19:01:43.186901 502372 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0403 19:01:43.186974 502372 kubeadm.go:392] StartCluster: {Name:embed-certs-991162 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-991162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0403 19:01:43.187055 502372 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0403 19:01:43.187129 502372 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0403 19:01:43.230598 502372 cri.go:89] found id: ""
I0403 19:01:43.230721 502372 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0403 19:01:43.239676 502372 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0403 19:01:43.249033 502372 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0403 19:01:43.249128 502372 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0403 19:01:43.258121 502372 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0403 19:01:43.258143 502372 kubeadm.go:157] found existing configuration files:
I0403 19:01:43.258196 502372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0403 19:01:43.266769 502372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0403 19:01:43.266888 502372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0403 19:01:43.275281 502372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0403 19:01:43.284227 502372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0403 19:01:43.284295 502372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0403 19:01:43.292753 502372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0403 19:01:43.301541 502372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0403 19:01:43.301632 502372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0403 19:01:43.310348 502372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0403 19:01:43.319692 502372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0403 19:01:43.319828 502372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0403 19:01:43.328347 502372 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0403 19:01:43.372040 502372 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0403 19:01:43.372424 502372 kubeadm.go:310] [preflight] Running pre-flight checks
I0403 19:01:43.407747 502372 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0403 19:01:43.407858 502372 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1081-aws
I0403 19:01:43.407919 502372 kubeadm.go:310] OS: Linux
I0403 19:01:43.407996 502372 kubeadm.go:310] CGROUPS_CPU: enabled
I0403 19:01:43.408066 502372 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0403 19:01:43.408141 502372 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0403 19:01:43.408212 502372 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0403 19:01:43.408281 502372 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0403 19:01:43.408364 502372 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0403 19:01:43.408432 502372 kubeadm.go:310] CGROUPS_PIDS: enabled
I0403 19:01:43.408506 502372 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0403 19:01:43.408577 502372 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0403 19:01:43.490165 502372 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0403 19:01:43.490332 502372 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0403 19:01:43.490452 502372 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0403 19:01:43.495841 502372 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0403 19:01:40.510062 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:43.009733 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:45.011783 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:43.502143 502372 out.go:235] - Generating certificates and keys ...
I0403 19:01:43.502323 502372 kubeadm.go:310] [certs] Using existing ca certificate authority
I0403 19:01:43.502427 502372 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0403 19:01:43.778190 502372 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0403 19:01:44.040359 502372 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0403 19:01:45.144558 502372 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0403 19:01:47.508770 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:50.009368 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:45.759089 502372 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0403 19:01:46.217231 502372 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0403 19:01:46.217512 502372 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-991162 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0403 19:01:46.459349 502372 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0403 19:01:46.459635 502372 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-991162 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0403 19:01:46.680654 502372 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0403 19:01:47.178317 502372 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0403 19:01:47.652524 502372 kubeadm.go:310] [certs] Generating "sa" key and public key
I0403 19:01:47.652814 502372 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0403 19:01:48.594233 502372 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0403 19:01:48.986458 502372 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0403 19:01:49.583211 502372 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0403 19:01:49.822947 502372 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0403 19:01:50.185999 502372 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0403 19:01:50.187394 502372 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0403 19:01:50.189849 502372 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0403 19:01:50.193244 502372 out.go:235] - Booting up control plane ...
I0403 19:01:50.193355 502372 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0403 19:01:50.193440 502372 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0403 19:01:50.194593 502372 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0403 19:01:50.221801 502372 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0403 19:01:50.228588 502372 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0403 19:01:50.228869 502372 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0403 19:01:52.009507 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:54.014846 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:50.328567 502372 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0403 19:01:50.328686 502372 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0403 19:01:51.830065 502372 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.50182502s
I0403 19:01:51.830152 502372 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0403 19:01:57.832370 502372 kubeadm.go:310] [api-check] The API server is healthy after 6.002249551s
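Both kubelet-check and api-check above are poll loops against a /healthz endpoint with a 4m0s budget. A sketch against the kubelet URL shown in the log; kubeadm's actual implementation differs in detail (client setup, backoff):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls the endpoint until it returns 200 OK or the budget
// runs out, reporting how long it took (the log's "healthy after" line).
func waitHealthy(url string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Since(start) < timeout {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return time.Since(start), nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return 0, fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	d, err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("The kubelet is healthy after %s\n", d)
}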
I0403 19:01:57.852715 502372 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0403 19:01:57.869853 502372 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0403 19:01:57.897018 502372 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0403 19:01:57.897217 502372 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-991162 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0403 19:01:57.908231 502372 kubeadm.go:310] [bootstrap-token] Using token: s0ahr0.zh08motnldpc3moi
I0403 19:01:57.913291 502372 out.go:235] - Configuring RBAC rules ...
I0403 19:01:57.913425 502372 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0403 19:01:57.915717 502372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0403 19:01:57.923551 502372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0403 19:01:57.927898 502372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0403 19:01:57.934219 502372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0403 19:01:57.938400 502372 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0403 19:01:58.239483 502372 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0403 19:01:58.662109 502372 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0403 19:01:59.241706 502372 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0403 19:01:59.242841 502372 kubeadm.go:310]
I0403 19:01:59.242917 502372 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0403 19:01:59.242945 502372 kubeadm.go:310]
I0403 19:01:59.243033 502372 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0403 19:01:59.243043 502372 kubeadm.go:310]
I0403 19:01:59.243069 502372 kubeadm.go:310] mkdir -p $HOME/.kube
I0403 19:01:59.243134 502372 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0403 19:01:59.243187 502372 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0403 19:01:59.243197 502372 kubeadm.go:310]
I0403 19:01:59.243250 502372 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0403 19:01:59.243258 502372 kubeadm.go:310]
I0403 19:01:59.243306 502372 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0403 19:01:59.243314 502372 kubeadm.go:310]
I0403 19:01:59.243366 502372 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0403 19:01:59.243442 502372 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0403 19:01:59.243513 502372 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0403 19:01:59.243521 502372 kubeadm.go:310]
I0403 19:01:59.243606 502372 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0403 19:01:59.243685 502372 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0403 19:01:59.243693 502372 kubeadm.go:310]
I0403 19:01:59.243777 502372 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s0ahr0.zh08motnldpc3moi \
I0403 19:01:59.243882 502372 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:6f49c83739bd77ca3bcf801a072c85ddf3f4d048525d93ed13a794d8f4d5ae6b \
I0403 19:01:59.243908 502372 kubeadm.go:310] --control-plane
I0403 19:01:59.243917 502372 kubeadm.go:310]
I0403 19:01:59.244001 502372 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0403 19:01:59.244009 502372 kubeadm.go:310]
I0403 19:01:59.244090 502372 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s0ahr0.zh08motnldpc3moi \
I0403 19:01:59.244195 502372 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:6f49c83739bd77ca3bcf801a072c85ddf3f4d048525d93ed13a794d8f4d5ae6b
I0403 19:01:59.249316 502372 kubeadm.go:310] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I0403 19:01:59.249542 502372 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1081-aws\n", err: exit status 1
I0403 19:01:59.249687 502372 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0403 19:01:59.249712 502372 cni.go:84] Creating CNI manager for ""
I0403 19:01:59.249734 502372 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0403 19:01:59.254722 502372 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0403 19:01:56.507276 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:58.512189 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:01:59.257830 502372 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0403 19:01:59.261766 502372 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
I0403 19:01:59.261785 502372 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I0403 19:01:59.283660 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0403 19:01:59.596876 502372 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0403 19:01:59.597028 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 19:01:59.597128 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-991162 minikube.k8s.io/updated_at=2025_04_03T19_01_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053 minikube.k8s.io/name=embed-certs-991162 minikube.k8s.io/primary=true
I0403 19:01:59.779234 502372 ops.go:34] apiserver oom_adj: -16
I0403 19:01:59.779387 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 19:02:00.279528 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 19:02:00.779508 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 19:02:01.279896 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 19:02:01.779516 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 19:02:02.280327 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 19:02:02.779934 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 19:02:03.279525 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 19:02:03.779524 502372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 19:02:03.926095 502372 kubeadm.go:1113] duration metric: took 4.329115417s to wait for elevateKubeSystemPrivileges
I0403 19:02:03.926125 502372 kubeadm.go:394] duration metric: took 20.7391545s to StartCluster
I0403 19:02:03.926146 502372 settings.go:142] acquiring lock: {Name:mkda4ef6aa45ba7450baec7632aaddbe8adae188 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 19:02:03.926209 502372 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20591-279421/kubeconfig
I0403 19:02:03.927537 502372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-279421/kubeconfig: {Name:mkd56fac60608d6ef399d7920f9889f463e24d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0403 19:02:03.927761 502372 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0403 19:02:03.927877 502372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0403 19:02:03.928131 502372 config.go:182] Loaded profile config "embed-certs-991162": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 19:02:03.928098 502372 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0403 19:02:03.928180 502372 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-991162"
I0403 19:02:03.928204 502372 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-991162"
I0403 19:02:03.928229 502372 host.go:66] Checking if "embed-certs-991162" exists ...
I0403 19:02:03.928249 502372 addons.go:69] Setting default-storageclass=true in profile "embed-certs-991162"
I0403 19:02:03.928266 502372 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-991162"
I0403 19:02:03.928565 502372 cli_runner.go:164] Run: docker container inspect embed-certs-991162 --format={{.State.Status}}
I0403 19:02:03.928694 502372 cli_runner.go:164] Run: docker container inspect embed-certs-991162 --format={{.State.Status}}
I0403 19:02:03.931966 502372 out.go:177] * Verifying Kubernetes components...
I0403 19:02:03.937289 502372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0403 19:02:03.960677 502372 addons.go:238] Setting addon default-storageclass=true in "embed-certs-991162"
I0403 19:02:03.960721 502372 host.go:66] Checking if "embed-certs-991162" exists ...
I0403 19:02:03.961141 502372 cli_runner.go:164] Run: docker container inspect embed-certs-991162 --format={{.State.Status}}
I0403 19:02:03.977506 502372 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0403 19:02:01.008389 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:03.009414 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:03.985308 502372 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0403 19:02:03.985331 502372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0403 19:02:03.985396 502372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-991162
I0403 19:02:03.998926 502372 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0403 19:02:03.998953 502372 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0403 19:02:03.999026 502372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-991162
I0403 19:02:04.021483 502372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/embed-certs-991162/id_rsa Username:docker}
I0403 19:02:04.049063 502372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/20591-279421/.minikube/machines/embed-certs-991162/id_rsa Username:docker}
I0403 19:02:04.223829 502372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.85.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0403 19:02:04.223943 502372 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0403 19:02:04.268271 502372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0403 19:02:04.299516 502372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0403 19:02:04.929418 502372 node_ready.go:35] waiting up to 6m0s for node "embed-certs-991162" to be "Ready" ...
I0403 19:02:04.929773 502372 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
I0403 19:02:04.963789 502372 node_ready.go:49] node "embed-certs-991162" has status "Ready":"True"
I0403 19:02:04.963811 502372 node_ready.go:38] duration metric: took 34.362222ms for node "embed-certs-991162" to be "Ready" ...
I0403 19:02:04.963821 502372 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0403 19:02:04.975212 502372 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pdhcq" in "kube-system" namespace to be "Ready" ...
I0403 19:02:05.225206 502372 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0403 19:02:05.228612 502372 addons.go:514] duration metric: took 1.300489192s for enable addons: enabled=[default-storageclass storage-provisioner]
I0403 19:02:05.507572 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:08.009585 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:10.012914 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:05.433747 502372 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-991162" context rescaled to 1 replicas
I0403 19:02:05.978355 502372 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-pdhcq" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-pdhcq" not found
I0403 19:02:05.978431 502372 pod_ready.go:82] duration metric: took 1.003189545s for pod "coredns-668d6bf9bc-pdhcq" in "kube-system" namespace to be "Ready" ...
E0403 19:02:05.978458 502372 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-pdhcq" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-pdhcq" not found
I0403 19:02:05.978482 502372 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-vfh27" in "kube-system" namespace to be "Ready" ...
I0403 19:02:07.984475 502372 pod_ready.go:103] pod "coredns-668d6bf9bc-vfh27" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:09.984739 502372 pod_ready.go:103] pod "coredns-668d6bf9bc-vfh27" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:12.587566 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:15.016096 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:12.484334 502372 pod_ready.go:103] pod "coredns-668d6bf9bc-vfh27" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:14.484509 502372 pod_ready.go:103] pod "coredns-668d6bf9bc-vfh27" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:17.507032 492847 pod_ready.go:103] pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:18.002529 492847 pod_ready.go:82] duration metric: took 4m0.000807184s for pod "metrics-server-9975d5f86-xfpl4" in "kube-system" namespace to be "Ready" ...
E0403 19:02:18.002563 492847 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0403 19:02:18.002574 492847 pod_ready.go:39] duration metric: took 5m28.028966474s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0403 19:02:18.002594 492847 api_server.go:52] waiting for apiserver process to appear ...
I0403 19:02:18.002632 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0403 19:02:18.002712 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0403 19:02:18.046159 492847 cri.go:89] found id: "6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a"
I0403 19:02:18.046184 492847 cri.go:89] found id: "708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf"
I0403 19:02:18.046190 492847 cri.go:89] found id: ""
I0403 19:02:18.046198 492847 logs.go:282] 2 containers: [6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a 708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf]
I0403 19:02:18.046261 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.050381 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.054309 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0403 19:02:18.054394 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0403 19:02:18.095726 492847 cri.go:89] found id: "1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0"
I0403 19:02:18.095750 492847 cri.go:89] found id: "d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2"
I0403 19:02:18.095755 492847 cri.go:89] found id: ""
I0403 19:02:18.095763 492847 logs.go:282] 2 containers: [1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0 d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2]
I0403 19:02:18.095822 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.099427 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.103135 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0403 19:02:18.103211 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0403 19:02:18.143656 492847 cri.go:89] found id: "390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9"
I0403 19:02:18.143686 492847 cri.go:89] found id: "2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36"
I0403 19:02:18.143693 492847 cri.go:89] found id: ""
I0403 19:02:18.143703 492847 logs.go:282] 2 containers: [390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9 2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36]
I0403 19:02:18.143790 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.147571 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.151350 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0403 19:02:18.151460 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0403 19:02:18.190593 492847 cri.go:89] found id: "a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33"
I0403 19:02:18.190618 492847 cri.go:89] found id: "5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf"
I0403 19:02:18.190624 492847 cri.go:89] found id: ""
I0403 19:02:18.190631 492847 logs.go:282] 2 containers: [a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33 5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf]
I0403 19:02:18.190693 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.194425 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.198188 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0403 19:02:18.198265 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0403 19:02:18.245589 492847 cri.go:89] found id: "34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c"
I0403 19:02:18.245704 492847 cri.go:89] found id: "1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8"
I0403 19:02:18.245726 492847 cri.go:89] found id: ""
I0403 19:02:18.245741 492847 logs.go:282] 2 containers: [34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c 1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8]
I0403 19:02:18.245817 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.249764 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.253223 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0403 19:02:18.253342 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0403 19:02:18.294187 492847 cri.go:89] found id: "d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e"
I0403 19:02:18.294213 492847 cri.go:89] found id: "dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426"
I0403 19:02:18.294219 492847 cri.go:89] found id: ""
I0403 19:02:18.294227 492847 logs.go:282] 2 containers: [d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426]
I0403 19:02:18.294287 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.297832 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.301208 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0403 19:02:18.301277 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0403 19:02:18.339338 492847 cri.go:89] found id: "399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7"
I0403 19:02:18.339357 492847 cri.go:89] found id: "a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4"
I0403 19:02:18.339362 492847 cri.go:89] found id: ""
I0403 19:02:18.339369 492847 logs.go:282] 2 containers: [399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7 a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4]
I0403 19:02:18.339425 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.343067 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.346263 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0403 19:02:18.346366 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0403 19:02:18.386867 492847 cri.go:89] found id: "982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c"
I0403 19:02:18.386893 492847 cri.go:89] found id: ""
I0403 19:02:18.386902 492847 logs.go:282] 1 containers: [982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c]
I0403 19:02:18.386960 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.390505 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0403 19:02:18.390634 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0403 19:02:18.426483 492847 cri.go:89] found id: "110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c"
I0403 19:02:18.426549 492847 cri.go:89] found id: "0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e"
I0403 19:02:18.426559 492847 cri.go:89] found id: ""
I0403 19:02:18.426567 492847 logs.go:282] 2 containers: [110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c 0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e]
I0403 19:02:18.426636 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.430078 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:18.433464 492847 logs.go:123] Gathering logs for storage-provisioner [110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c] ...
I0403 19:02:18.433488 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c"
I0403 19:02:18.474945 492847 logs.go:123] Gathering logs for etcd [d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2] ...
I0403 19:02:18.474973 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2"
I0403 19:02:18.525243 492847 logs.go:123] Gathering logs for coredns [390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9] ...
I0403 19:02:18.525274 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9"
I0403 19:02:18.568513 492847 logs.go:123] Gathering logs for kube-proxy [34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c] ...
I0403 19:02:18.568542 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c"
I0403 19:02:18.619394 492847 logs.go:123] Gathering logs for kube-controller-manager [d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e] ...
I0403 19:02:18.619424 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e"
I0403 19:02:18.703917 492847 logs.go:123] Gathering logs for kubernetes-dashboard [982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c] ...
I0403 19:02:18.703994 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c"
I0403 19:02:18.782639 492847 logs.go:123] Gathering logs for containerd ...
I0403 19:02:18.782718 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0403 19:02:18.848208 492847 logs.go:123] Gathering logs for kube-apiserver [6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a] ...
I0403 19:02:18.848291 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a"
I0403 19:02:18.931802 492847 logs.go:123] Gathering logs for kube-scheduler [5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf] ...
I0403 19:02:18.931881 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf"
I0403 19:02:18.994785 492847 logs.go:123] Gathering logs for kube-proxy [1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8] ...
I0403 19:02:18.994868 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8"
I0403 19:02:19.037717 492847 logs.go:123] Gathering logs for kindnet [399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7] ...
I0403 19:02:19.037747 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7"
I0403 19:02:19.092893 492847 logs.go:123] Gathering logs for container status ...
I0403 19:02:19.092931 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0403 19:02:19.140985 492847 logs.go:123] Gathering logs for kubelet ...
I0403 19:02:19.141017 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0403 19:02:19.189855 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866658 655 reflector.go:138] object-"kube-system"/"kube-proxy-token-2w58r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2w58r" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.190079 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866762 655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.190292 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866836 655 reflector.go:138] object-"kube-system"/"coredns-token-dlgd7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-dlgd7" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.190516 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866896 655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.190734 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866962 655 reflector.go:138] object-"kube-system"/"kindnet-token-5tz9w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5tz9w" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.190963 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.867057 655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-72p5s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-72p5s" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.191185 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.882691 655 reflector.go:138] object-"default"/"default-token-pflvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pflvq" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:19.198869 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:52 old-k8s-version-807851 kubelet[655]: E0403 18:56:52.176260 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:19.199062 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:52 old-k8s-version-807851 kubelet[655]: E0403 18:56:52.321913 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.203526 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:07 old-k8s-version-807851 kubelet[655]: E0403 18:57:07.491831 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:19.205222 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:18 old-k8s-version-807851 kubelet[655]: E0403 18:57:18.480376 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.206038 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:21 old-k8s-version-807851 kubelet[655]: E0403 18:57:21.487400 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.206500 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:22 old-k8s-version-807851 kubelet[655]: E0403 18:57:22.492577 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.206829 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:23 old-k8s-version-807851 kubelet[655]: E0403 18:57:23.494384 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.207269 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:25 old-k8s-version-807851 kubelet[655]: E0403 18:57:25.512501 655 pod_workers.go:191] Error syncing pod 11226bcd-ff55-42fd-aee7-efbfee400f0d ("storage-provisioner_kube-system(11226bcd-ff55-42fd-aee7-efbfee400f0d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(11226bcd-ff55-42fd-aee7-efbfee400f0d)"
W0403 19:02:19.211083 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:32 old-k8s-version-807851 kubelet[655]: E0403 18:57:32.491653 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:19.211693 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:37 old-k8s-version-807851 kubelet[655]: E0403 18:57:37.554635 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.212154 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:42 old-k8s-version-807851 kubelet[655]: E0403 18:57:42.438627 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.212348 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:43 old-k8s-version-807851 kubelet[655]: E0403 18:57:43.477336 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.212716 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:53 old-k8s-version-807851 kubelet[655]: E0403 18:57:53.476964 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.212904 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:57 old-k8s-version-807851 kubelet[655]: E0403 18:57:57.477386 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.213490 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:05 old-k8s-version-807851 kubelet[655]: E0403 18:58:05.636989 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.213829 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:12 old-k8s-version-807851 kubelet[655]: E0403 18:58:12.438564 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.214014 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:12 old-k8s-version-807851 kubelet[655]: E0403 18:58:12.477281 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.216809 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:23 old-k8s-version-807851 kubelet[655]: E0403 18:58:23.488466 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:19.217148 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:24 old-k8s-version-807851 kubelet[655]: E0403 18:58:24.477118 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.217863 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:35 old-k8s-version-807851 kubelet[655]: E0403 18:58:35.476943 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.218063 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:37 old-k8s-version-807851 kubelet[655]: E0403 18:58:37.477387 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.218657 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:48 old-k8s-version-807851 kubelet[655]: E0403 18:58:48.750238 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.218842 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:50 old-k8s-version-807851 kubelet[655]: E0403 18:58:50.478191 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.219182 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:52 old-k8s-version-807851 kubelet[655]: E0403 18:58:52.438533 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.219370 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:04 old-k8s-version-807851 kubelet[655]: E0403 18:59:04.477211 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.219697 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:05 old-k8s-version-807851 kubelet[655]: E0403 18:59:05.476890 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.220013 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:19 old-k8s-version-807851 kubelet[655]: E0403 18:59:19.477686 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.220209 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:19 old-k8s-version-807851 kubelet[655]: E0403 18:59:19.478097 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.220399 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:31 old-k8s-version-807851 kubelet[655]: E0403 18:59:31.477291 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.220725 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:33 old-k8s-version-807851 kubelet[655]: E0403 18:59:33.476857 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.223179 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:46 old-k8s-version-807851 kubelet[655]: E0403 18:59:46.487501 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:19.223517 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:48 old-k8s-version-807851 kubelet[655]: E0403 18:59:48.477329 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.223702 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:59 old-k8s-version-807851 kubelet[655]: E0403 18:59:59.477203 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.224028 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:01 old-k8s-version-807851 kubelet[655]: E0403 19:00:01.477105 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.224212 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:14 old-k8s-version-807851 kubelet[655]: E0403 19:00:14.477636 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.224809 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:16 old-k8s-version-807851 kubelet[655]: E0403 19:00:16.983011 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.225135 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:22 old-k8s-version-807851 kubelet[655]: E0403 19:00:22.438177 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.225320 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:29 old-k8s-version-807851 kubelet[655]: E0403 19:00:29.477285 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.225657 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:37 old-k8s-version-807851 kubelet[655]: E0403 19:00:37.476888 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.225846 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:43 old-k8s-version-807851 kubelet[655]: E0403 19:00:43.477242 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.226171 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:52 old-k8s-version-807851 kubelet[655]: E0403 19:00:52.481943 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.226356 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:58 old-k8s-version-807851 kubelet[655]: E0403 19:00:58.477202 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.226687 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:04 old-k8s-version-807851 kubelet[655]: E0403 19:01:04.483258 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.226871 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:13 old-k8s-version-807851 kubelet[655]: E0403 19:01:13.477232 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.227197 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:19 old-k8s-version-807851 kubelet[655]: E0403 19:01:19.476987 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.227381 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:24 old-k8s-version-807851 kubelet[655]: E0403 19:01:24.477484 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.227707 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:33 old-k8s-version-807851 kubelet[655]: E0403 19:01:33.477359 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.227892 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:39 old-k8s-version-807851 kubelet[655]: E0403 19:01:39.477333 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.228216 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:47 old-k8s-version-807851 kubelet[655]: E0403 19:01:47.476853 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.228403 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:53 old-k8s-version-807851 kubelet[655]: E0403 19:01:53.477144 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.228728 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:58 old-k8s-version-807851 kubelet[655]: E0403 19:01:58.477092 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.228909 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:04 old-k8s-version-807851 kubelet[655]: E0403 19:02:04.486011 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.229235 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:13 old-k8s-version-807851 kubelet[655]: E0403 19:02:13.476993 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.229421 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:17 old-k8s-version-807851 kubelet[655]: E0403 19:02:17.477249 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0403 19:02:19.229433 492847 logs.go:123] Gathering logs for dmesg ...
I0403 19:02:19.229447 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0403 19:02:19.246484 492847 logs.go:123] Gathering logs for kube-apiserver [708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf] ...
I0403 19:02:19.246514 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf"
I0403 19:02:19.306866 492847 logs.go:123] Gathering logs for kube-scheduler [a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33] ...
I0403 19:02:19.306906 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33"
I0403 19:02:19.346628 492847 logs.go:123] Gathering logs for kube-controller-manager [dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426] ...
I0403 19:02:19.346657 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426"
I0403 19:02:19.416616 492847 logs.go:123] Gathering logs for kindnet [a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4] ...
I0403 19:02:19.416648 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4"
I0403 19:02:19.456765 492847 logs.go:123] Gathering logs for storage-provisioner [0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e] ...
I0403 19:02:19.456794 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e"
I0403 19:02:19.501537 492847 logs.go:123] Gathering logs for describe nodes ...
I0403 19:02:19.501564 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0403 19:02:19.661763 492847 logs.go:123] Gathering logs for etcd [1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0] ...
I0403 19:02:19.661791 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0"
I0403 19:02:19.725922 492847 logs.go:123] Gathering logs for coredns [2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36] ...
I0403 19:02:19.725950 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36"
I0403 19:02:19.789868 492847 out.go:358] Setting ErrFile to fd 2...
I0403 19:02:19.789893 492847 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0403 19:02:19.789944 492847 out.go:270] X Problems detected in kubelet:
W0403 19:02:19.789957 492847 out.go:270] Apr 03 19:01:53 old-k8s-version-807851 kubelet[655]: E0403 19:01:53.477144 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.789972 492847 out.go:270] Apr 03 19:01:58 old-k8s-version-807851 kubelet[655]: E0403 19:01:58.477092 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.789994 492847 out.go:270] Apr 03 19:02:04 old-k8s-version-807851 kubelet[655]: E0403 19:02:04.486011 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:19.790005 492847 out.go:270] Apr 03 19:02:13 old-k8s-version-807851 kubelet[655]: E0403 19:02:13.476993 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:19.790019 492847 out.go:270] Apr 03 19:02:17 old-k8s-version-807851 kubelet[655]: E0403 19:02:17.477249 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0403 19:02:19.790030 492847 out.go:358] Setting ErrFile to fd 2...
I0403 19:02:19.790037 492847 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 19:02:16.486389 502372 pod_ready.go:103] pod "coredns-668d6bf9bc-vfh27" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:18.984322 502372 pod_ready.go:103] pod "coredns-668d6bf9bc-vfh27" in "kube-system" namespace has status "Ready":"False"
I0403 19:02:19.983992 502372 pod_ready.go:93] pod "coredns-668d6bf9bc-vfh27" in "kube-system" namespace has status "Ready":"True"
I0403 19:02:19.984024 502372 pod_ready.go:82] duration metric: took 14.005506303s for pod "coredns-668d6bf9bc-vfh27" in "kube-system" namespace to be "Ready" ...
I0403 19:02:19.984036 502372 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-991162" in "kube-system" namespace to be "Ready" ...
I0403 19:02:19.987861 502372 pod_ready.go:93] pod "etcd-embed-certs-991162" in "kube-system" namespace has status "Ready":"True"
I0403 19:02:19.987882 502372 pod_ready.go:82] duration metric: took 3.838303ms for pod "etcd-embed-certs-991162" in "kube-system" namespace to be "Ready" ...
I0403 19:02:19.987894 502372 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-991162" in "kube-system" namespace to be "Ready" ...
I0403 19:02:19.992067 502372 pod_ready.go:93] pod "kube-apiserver-embed-certs-991162" in "kube-system" namespace has status "Ready":"True"
I0403 19:02:19.992089 502372 pod_ready.go:82] duration metric: took 4.188822ms for pod "kube-apiserver-embed-certs-991162" in "kube-system" namespace to be "Ready" ...
I0403 19:02:19.992101 502372 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-991162" in "kube-system" namespace to be "Ready" ...
I0403 19:02:19.995992 502372 pod_ready.go:93] pod "kube-controller-manager-embed-certs-991162" in "kube-system" namespace has status "Ready":"True"
I0403 19:02:19.996013 502372 pod_ready.go:82] duration metric: took 3.904511ms for pod "kube-controller-manager-embed-certs-991162" in "kube-system" namespace to be "Ready" ...
I0403 19:02:19.996025 502372 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dbpsq" in "kube-system" namespace to be "Ready" ...
I0403 19:02:20.000327 502372 pod_ready.go:93] pod "kube-proxy-dbpsq" in "kube-system" namespace has status "Ready":"True"
I0403 19:02:20.000353 502372 pod_ready.go:82] duration metric: took 4.320812ms for pod "kube-proxy-dbpsq" in "kube-system" namespace to be "Ready" ...
I0403 19:02:20.000365 502372 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-991162" in "kube-system" namespace to be "Ready" ...
I0403 19:02:20.382626 502372 pod_ready.go:93] pod "kube-scheduler-embed-certs-991162" in "kube-system" namespace has status "Ready":"True"
I0403 19:02:20.382661 502372 pod_ready.go:82] duration metric: took 382.287056ms for pod "kube-scheduler-embed-certs-991162" in "kube-system" namespace to be "Ready" ...
I0403 19:02:20.382671 502372 pod_ready.go:39] duration metric: took 15.418837697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0403 19:02:20.382688 502372 api_server.go:52] waiting for apiserver process to appear ...
I0403 19:02:20.382765 502372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0403 19:02:20.396780 502372 api_server.go:72] duration metric: took 16.468982539s to wait for apiserver process to appear ...
I0403 19:02:20.396845 502372 api_server.go:88] waiting for apiserver healthz status ...
I0403 19:02:20.396880 502372 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0403 19:02:20.405252 502372 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0403 19:02:20.406528 502372 api_server.go:141] control plane version: v1.32.2
I0403 19:02:20.406598 502372 api_server.go:131] duration metric: took 9.730062ms to wait for apiserver health ...
I0403 19:02:20.406615 502372 system_pods.go:43] waiting for kube-system pods to appear ...
I0403 19:02:20.583397 502372 system_pods.go:59] 8 kube-system pods found
I0403 19:02:20.583434 502372 system_pods.go:61] "coredns-668d6bf9bc-vfh27" [75cec10f-290b-411a-9856-9c70892cd05d] Running
I0403 19:02:20.583442 502372 system_pods.go:61] "etcd-embed-certs-991162" [8fa8dc0a-cecf-430d-a4db-26ef85c85f17] Running
I0403 19:02:20.583446 502372 system_pods.go:61] "kindnet-qhzl5" [8fd085dc-a1ef-4b6a-91c9-683565b6f579] Running
I0403 19:02:20.583450 502372 system_pods.go:61] "kube-apiserver-embed-certs-991162" [4e2e0141-79a8-42ab-965f-4a1c9eb67866] Running
I0403 19:02:20.583455 502372 system_pods.go:61] "kube-controller-manager-embed-certs-991162" [bc184d8b-9f67-4cce-9f1b-9d040e92ea85] Running
I0403 19:02:20.583459 502372 system_pods.go:61] "kube-proxy-dbpsq" [98385ccd-195f-4a0a-82da-3ebfb9a91616] Running
I0403 19:02:20.583463 502372 system_pods.go:61] "kube-scheduler-embed-certs-991162" [7dec09c9-8fe9-4a5c-bb51-30754dc04190] Running
I0403 19:02:20.583467 502372 system_pods.go:61] "storage-provisioner" [ff72e9d9-9a4a-41e6-a0fc-0c9e560aa254] Running
I0403 19:02:20.583481 502372 system_pods.go:74] duration metric: took 176.859107ms to wait for pod list to return data ...
I0403 19:02:20.583489 502372 default_sa.go:34] waiting for default service account to be created ...
I0403 19:02:20.782234 502372 default_sa.go:45] found service account: "default"
I0403 19:02:20.782312 502372 default_sa.go:55] duration metric: took 198.812927ms for default service account to be created ...
I0403 19:02:20.782336 502372 system_pods.go:116] waiting for k8s-apps to be running ...
I0403 19:02:20.982604 502372 system_pods.go:86] 8 kube-system pods found
I0403 19:02:20.982636 502372 system_pods.go:89] "coredns-668d6bf9bc-vfh27" [75cec10f-290b-411a-9856-9c70892cd05d] Running
I0403 19:02:20.982644 502372 system_pods.go:89] "etcd-embed-certs-991162" [8fa8dc0a-cecf-430d-a4db-26ef85c85f17] Running
I0403 19:02:20.982648 502372 system_pods.go:89] "kindnet-qhzl5" [8fd085dc-a1ef-4b6a-91c9-683565b6f579] Running
I0403 19:02:20.982695 502372 system_pods.go:89] "kube-apiserver-embed-certs-991162" [4e2e0141-79a8-42ab-965f-4a1c9eb67866] Running
I0403 19:02:20.982707 502372 system_pods.go:89] "kube-controller-manager-embed-certs-991162" [bc184d8b-9f67-4cce-9f1b-9d040e92ea85] Running
I0403 19:02:20.982713 502372 system_pods.go:89] "kube-proxy-dbpsq" [98385ccd-195f-4a0a-82da-3ebfb9a91616] Running
I0403 19:02:20.982730 502372 system_pods.go:89] "kube-scheduler-embed-certs-991162" [7dec09c9-8fe9-4a5c-bb51-30754dc04190] Running
I0403 19:02:20.982736 502372 system_pods.go:89] "storage-provisioner" [ff72e9d9-9a4a-41e6-a0fc-0c9e560aa254] Running
I0403 19:02:20.982769 502372 system_pods.go:126] duration metric: took 200.404149ms to wait for k8s-apps to be running ...
I0403 19:02:20.982784 502372 system_svc.go:44] waiting for kubelet service to be running ....
I0403 19:02:20.982859 502372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0403 19:02:20.994749 502372 system_svc.go:56] duration metric: took 11.956506ms WaitForService to wait for kubelet
I0403 19:02:20.994791 502372 kubeadm.go:582] duration metric: took 17.066993309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0403 19:02:20.994813 502372 node_conditions.go:102] verifying NodePressure condition ...
I0403 19:02:21.182968 502372 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0403 19:02:21.182998 502372 node_conditions.go:123] node cpu capacity is 2
I0403 19:02:21.183011 502372 node_conditions.go:105] duration metric: took 188.191614ms to run NodePressure ...
I0403 19:02:21.183023 502372 start.go:241] waiting for startup goroutines ...
I0403 19:02:21.183031 502372 start.go:246] waiting for cluster config update ...
I0403 19:02:21.183042 502372 start.go:255] writing updated cluster config ...
I0403 19:02:21.183342 502372 ssh_runner.go:195] Run: rm -f paused
I0403 19:02:21.255848 502372 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
I0403 19:02:21.257413 502372 out.go:177] * Done! kubectl is now configured to use "embed-certs-991162" cluster and "default" namespace by default
I0403 19:02:29.791382 492847 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0403 19:02:29.803974 492847 api_server.go:72] duration metric: took 5m57.251559586s to wait for apiserver process to appear ...
I0403 19:02:29.803998 492847 api_server.go:88] waiting for apiserver healthz status ...
I0403 19:02:29.804036 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0403 19:02:29.804099 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0403 19:02:29.840931 492847 cri.go:89] found id: "6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a"
I0403 19:02:29.840957 492847 cri.go:89] found id: "708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf"
I0403 19:02:29.840962 492847 cri.go:89] found id: ""
I0403 19:02:29.840970 492847 logs.go:282] 2 containers: [6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a 708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf]
I0403 19:02:29.841027 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.844658 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.848226 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0403 19:02:29.848320 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0403 19:02:29.885827 492847 cri.go:89] found id: "1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0"
I0403 19:02:29.885852 492847 cri.go:89] found id: "d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2"
I0403 19:02:29.885858 492847 cri.go:89] found id: ""
I0403 19:02:29.885865 492847 logs.go:282] 2 containers: [1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0 d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2]
I0403 19:02:29.885924 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.889523 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.893170 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0403 19:02:29.893245 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0403 19:02:29.933618 492847 cri.go:89] found id: "390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9"
I0403 19:02:29.933688 492847 cri.go:89] found id: "2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36"
I0403 19:02:29.933695 492847 cri.go:89] found id: ""
I0403 19:02:29.933703 492847 logs.go:282] 2 containers: [390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9 2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36]
I0403 19:02:29.933765 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.937392 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.940763 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0403 19:02:29.940833 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0403 19:02:29.981902 492847 cri.go:89] found id: "a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33"
I0403 19:02:29.981926 492847 cri.go:89] found id: "5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf"
I0403 19:02:29.981932 492847 cri.go:89] found id: ""
I0403 19:02:29.981942 492847 logs.go:282] 2 containers: [a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33 5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf]
I0403 19:02:29.982001 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.986102 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:29.989577 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0403 19:02:29.989692 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0403 19:02:30.036713 492847 cri.go:89] found id: "34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c"
I0403 19:02:30.036738 492847 cri.go:89] found id: "1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8"
I0403 19:02:30.036745 492847 cri.go:89] found id: ""
I0403 19:02:30.036753 492847 logs.go:282] 2 containers: [34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c 1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8]
I0403 19:02:30.036822 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.041428 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.046110 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0403 19:02:30.046205 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0403 19:02:30.090457 492847 cri.go:89] found id: "d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e"
I0403 19:02:30.090492 492847 cri.go:89] found id: "dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426"
I0403 19:02:30.090498 492847 cri.go:89] found id: ""
I0403 19:02:30.090505 492847 logs.go:282] 2 containers: [d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426]
I0403 19:02:30.090569 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.094766 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.098786 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0403 19:02:30.098874 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0403 19:02:30.138440 492847 cri.go:89] found id: "399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7"
I0403 19:02:30.138519 492847 cri.go:89] found id: "a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4"
I0403 19:02:30.138533 492847 cri.go:89] found id: ""
I0403 19:02:30.138542 492847 logs.go:282] 2 containers: [399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7 a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4]
I0403 19:02:30.138618 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.142678 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.146521 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0403 19:02:30.146654 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0403 19:02:30.186035 492847 cri.go:89] found id: "110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c"
I0403 19:02:30.186071 492847 cri.go:89] found id: "0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e"
I0403 19:02:30.186077 492847 cri.go:89] found id: ""
I0403 19:02:30.186085 492847 logs.go:282] 2 containers: [110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c 0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e]
I0403 19:02:30.186184 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.190175 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.194031 492847 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0403 19:02:30.194127 492847 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0403 19:02:30.234226 492847 cri.go:89] found id: "982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c"
I0403 19:02:30.234251 492847 cri.go:89] found id: ""
I0403 19:02:30.234259 492847 logs.go:282] 1 containers: [982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c]
I0403 19:02:30.234338 492847 ssh_runner.go:195] Run: which crictl
I0403 19:02:30.237787 492847 logs.go:123] Gathering logs for coredns [390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9] ...
I0403 19:02:30.237817 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9"
I0403 19:02:30.279572 492847 logs.go:123] Gathering logs for kube-scheduler [a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33] ...
I0403 19:02:30.279607 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33"
I0403 19:02:30.321779 492847 logs.go:123] Gathering logs for kube-proxy [34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c] ...
I0403 19:02:30.321806 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c"
I0403 19:02:30.358470 492847 logs.go:123] Gathering logs for storage-provisioner [110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c] ...
I0403 19:02:30.358495 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c"
I0403 19:02:30.398042 492847 logs.go:123] Gathering logs for dmesg ...
I0403 19:02:30.398072 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0403 19:02:30.414911 492847 logs.go:123] Gathering logs for etcd [1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0] ...
I0403 19:02:30.414936 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0"
I0403 19:02:30.474445 492847 logs.go:123] Gathering logs for coredns [2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36] ...
I0403 19:02:30.474477 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36"
I0403 19:02:30.565974 492847 logs.go:123] Gathering logs for kindnet [a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4] ...
I0403 19:02:30.566000 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4"
I0403 19:02:30.632109 492847 logs.go:123] Gathering logs for kubelet ...
I0403 19:02:30.632139 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0403 19:02:30.696496 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866658 655 reflector.go:138] object-"kube-system"/"kube-proxy-token-2w58r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2w58r" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.696758 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866762 655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.697009 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866836 655 reflector.go:138] object-"kube-system"/"coredns-token-dlgd7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-dlgd7" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.697235 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866896 655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.697468 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.866962 655 reflector.go:138] object-"kube-system"/"kindnet-token-5tz9w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5tz9w" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.698075 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.867057 655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-72p5s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-72p5s" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.698327 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:49 old-k8s-version-807851 kubelet[655]: E0403 18:56:49.882691 655 reflector.go:138] object-"default"/"default-token-pflvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pflvq" is forbidden: User "system:node:old-k8s-version-807851" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-807851' and this object
W0403 19:02:30.708892 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:52 old-k8s-version-807851 kubelet[655]: E0403 18:56:52.176260 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:30.713150 492847 logs.go:138] Found kubelet problem: Apr 03 18:56:52 old-k8s-version-807851 kubelet[655]: E0403 18:56:52.321913 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.716941 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:07 old-k8s-version-807851 kubelet[655]: E0403 18:57:07.491831 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:30.720305 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:18 old-k8s-version-807851 kubelet[655]: E0403 18:57:18.480376 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.721230 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:21 old-k8s-version-807851 kubelet[655]: E0403 18:57:21.487400 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.721779 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:22 old-k8s-version-807851 kubelet[655]: E0403 18:57:22.492577 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.722136 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:23 old-k8s-version-807851 kubelet[655]: E0403 18:57:23.494384 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.722599 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:25 old-k8s-version-807851 kubelet[655]: E0403 18:57:25.512501 655 pod_workers.go:191] Error syncing pod 11226bcd-ff55-42fd-aee7-efbfee400f0d ("storage-provisioner_kube-system(11226bcd-ff55-42fd-aee7-efbfee400f0d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(11226bcd-ff55-42fd-aee7-efbfee400f0d)"
W0403 19:02:30.725754 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:32 old-k8s-version-807851 kubelet[655]: E0403 18:57:32.491653 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:30.726387 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:37 old-k8s-version-807851 kubelet[655]: E0403 18:57:37.554635 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.726872 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:42 old-k8s-version-807851 kubelet[655]: E0403 18:57:42.438627 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.727082 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:43 old-k8s-version-807851 kubelet[655]: E0403 18:57:43.477336 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.727468 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:53 old-k8s-version-807851 kubelet[655]: E0403 18:57:53.476964 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.727680 492847 logs.go:138] Found kubelet problem: Apr 03 18:57:57 old-k8s-version-807851 kubelet[655]: E0403 18:57:57.477386 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.728372 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:05 old-k8s-version-807851 kubelet[655]: E0403 18:58:05.636989 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.728733 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:12 old-k8s-version-807851 kubelet[655]: E0403 18:58:12.438564 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.728944 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:12 old-k8s-version-807851 kubelet[655]: E0403 18:58:12.477281 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.733381 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:23 old-k8s-version-807851 kubelet[655]: E0403 18:58:23.488466 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:30.733925 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:24 old-k8s-version-807851 kubelet[655]: E0403 18:58:24.477118 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.734298 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:35 old-k8s-version-807851 kubelet[655]: E0403 18:58:35.476943 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.734484 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:37 old-k8s-version-807851 kubelet[655]: E0403 18:58:37.477387 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.735068 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:48 old-k8s-version-807851 kubelet[655]: E0403 18:58:48.750238 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.735248 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:50 old-k8s-version-807851 kubelet[655]: E0403 18:58:50.478191 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.735570 492847 logs.go:138] Found kubelet problem: Apr 03 18:58:52 old-k8s-version-807851 kubelet[655]: E0403 18:58:52.438533 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.735749 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:04 old-k8s-version-807851 kubelet[655]: E0403 18:59:04.477211 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.736072 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:05 old-k8s-version-807851 kubelet[655]: E0403 18:59:05.476890 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.736389 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:19 old-k8s-version-807851 kubelet[655]: E0403 18:59:19.477686 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.736585 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:19 old-k8s-version-807851 kubelet[655]: E0403 18:59:19.478097 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.736764 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:31 old-k8s-version-807851 kubelet[655]: E0403 18:59:31.477291 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.737319 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:33 old-k8s-version-807851 kubelet[655]: E0403 18:59:33.476857 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.739861 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:46 old-k8s-version-807851 kubelet[655]: E0403 18:59:46.487501 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0403 19:02:30.740221 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:48 old-k8s-version-807851 kubelet[655]: E0403 18:59:48.477329 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.740472 492847 logs.go:138] Found kubelet problem: Apr 03 18:59:59 old-k8s-version-807851 kubelet[655]: E0403 18:59:59.477203 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.740850 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:01 old-k8s-version-807851 kubelet[655]: E0403 19:00:01.477105 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.741065 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:14 old-k8s-version-807851 kubelet[655]: E0403 19:00:14.477636 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.741747 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:16 old-k8s-version-807851 kubelet[655]: E0403 19:00:16.983011 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.742103 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:22 old-k8s-version-807851 kubelet[655]: E0403 19:00:22.438177 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.742317 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:29 old-k8s-version-807851 kubelet[655]: E0403 19:00:29.477285 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.742674 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:37 old-k8s-version-807851 kubelet[655]: E0403 19:00:37.476888 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.742888 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:43 old-k8s-version-807851 kubelet[655]: E0403 19:00:43.477242 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.743241 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:52 old-k8s-version-807851 kubelet[655]: E0403 19:00:52.481943 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.743450 492847 logs.go:138] Found kubelet problem: Apr 03 19:00:58 old-k8s-version-807851 kubelet[655]: E0403 19:00:58.477202 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.743803 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:04 old-k8s-version-807851 kubelet[655]: E0403 19:01:04.483258 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.744013 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:13 old-k8s-version-807851 kubelet[655]: E0403 19:01:13.477232 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.744371 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:19 old-k8s-version-807851 kubelet[655]: E0403 19:01:19.476987 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.744582 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:24 old-k8s-version-807851 kubelet[655]: E0403 19:01:24.477484 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.744934 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:33 old-k8s-version-807851 kubelet[655]: E0403 19:01:33.477359 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.745144 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:39 old-k8s-version-807851 kubelet[655]: E0403 19:01:39.477333 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.745496 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:47 old-k8s-version-807851 kubelet[655]: E0403 19:01:47.476853 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.745723 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:53 old-k8s-version-807851 kubelet[655]: E0403 19:01:53.477144 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.746075 492847 logs.go:138] Found kubelet problem: Apr 03 19:01:58 old-k8s-version-807851 kubelet[655]: E0403 19:01:58.477092 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.746284 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:04 old-k8s-version-807851 kubelet[655]: E0403 19:02:04.486011 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.746637 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:13 old-k8s-version-807851 kubelet[655]: E0403 19:02:13.476993 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.746848 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:17 old-k8s-version-807851 kubelet[655]: E0403 19:02:17.477249 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:30.747207 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:27 old-k8s-version-807851 kubelet[655]: E0403 19:02:27.478282 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:30.750104 492847 logs.go:138] Found kubelet problem: Apr 03 19:02:30 old-k8s-version-807851 kubelet[655]: E0403 19:02:30.527317 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
I0403 19:02:30.750156 492847 logs.go:123] Gathering logs for describe nodes ...
I0403 19:02:30.750186 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0403 19:02:30.967759 492847 logs.go:123] Gathering logs for kube-apiserver [708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf] ...
I0403 19:02:30.967831 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf"
I0403 19:02:31.073795 492847 logs.go:123] Gathering logs for etcd [d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2] ...
I0403 19:02:31.073859 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2"
I0403 19:02:31.139807 492847 logs.go:123] Gathering logs for kube-controller-manager [d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e] ...
I0403 19:02:31.139835 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e"
I0403 19:02:31.200749 492847 logs.go:123] Gathering logs for kube-controller-manager [dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426] ...
I0403 19:02:31.200824 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426"
I0403 19:02:31.310778 492847 logs.go:123] Gathering logs for storage-provisioner [0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e] ...
I0403 19:02:31.310816 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e"
I0403 19:02:31.374588 492847 logs.go:123] Gathering logs for kube-apiserver [6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a] ...
I0403 19:02:31.374619 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a"
I0403 19:02:31.466572 492847 logs.go:123] Gathering logs for kube-scheduler [5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf] ...
I0403 19:02:31.466663 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf"
I0403 19:02:31.522302 492847 logs.go:123] Gathering logs for kube-proxy [1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8] ...
I0403 19:02:31.522381 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8"
I0403 19:02:31.585157 492847 logs.go:123] Gathering logs for kindnet [399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7] ...
I0403 19:02:31.585232 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7"
I0403 19:02:31.645567 492847 logs.go:123] Gathering logs for kubernetes-dashboard [982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c] ...
I0403 19:02:31.645651 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c"
I0403 19:02:31.695352 492847 logs.go:123] Gathering logs for containerd ...
I0403 19:02:31.695433 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0403 19:02:31.777283 492847 logs.go:123] Gathering logs for container status ...
I0403 19:02:31.777361 492847 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0403 19:02:31.864284 492847 out.go:358] Setting ErrFile to fd 2...
I0403 19:02:31.864361 492847 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0403 19:02:31.864436 492847 out.go:270] X Problems detected in kubelet:
W0403 19:02:31.864482 492847 out.go:270] Apr 03 19:02:04 old-k8s-version-807851 kubelet[655]: E0403 19:02:04.486011 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:31.864522 492847 out.go:270] Apr 03 19:02:13 old-k8s-version-807851 kubelet[655]: E0403 19:02:13.476993 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:31.864562 492847 out.go:270] Apr 03 19:02:17 old-k8s-version-807851 kubelet[655]: E0403 19:02:17.477249 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0403 19:02:31.864606 492847 out.go:270] Apr 03 19:02:27 old-k8s-version-807851 kubelet[655]: E0403 19:02:27.478282 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
W0403 19:02:31.864639 492847 out.go:270] Apr 03 19:02:30 old-k8s-version-807851 kubelet[655]: E0403 19:02:30.527317 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
I0403 19:02:31.864690 492847 out.go:358] Setting ErrFile to fd 2...
I0403 19:02:31.864711 492847 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 19:02:41.869145 492847 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0403 19:02:41.884833 492847 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0403 19:02:41.888027 492847 out.go:201]
W0403 19:02:41.890953 492847 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0403 19:02:41.890999 492847 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0403 19:02:41.891017 492847 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0403 19:02:41.891023 492847 out.go:270] *
W0403 19:02:41.891916 492847 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0403 19:02:41.895864 492847 out.go:201]
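A minimal shell sketch of the recovery path the suggestion above points at (the profile name and Kubernetes version are the ones from this run; any other start flags are omitted here and would need to match the original invocation):

    # Wipe all minikube profiles and cached state, per the printed suggestion
    minikube delete --all --purge
    # Recreate the profile from scratch (remaining flags must mirror the failing run)
    minikube start -p old-k8s-version-807851 --kubernetes-version=v1.20.0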
==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                      ATTEMPT   POD ID          POD
cfb4956289005   523cad1a4df73   2 minutes ago   Exited    dashboard-metrics-scraper 5         5689df349e0d9   dashboard-metrics-scraper-8d5bb5db8-67bn4
110f13e725159   ba04bb24b9575   5 minutes ago   Running   storage-provisioner       2         6232be4eb282c   storage-provisioner
982b06663d5a2   20b332c9a70d8   5 minutes ago   Running   kubernetes-dashboard      0         d19c0d884c153   kubernetes-dashboard-cd95d586-nzlf6
34a6a3672a668   25a5233254979   5 minutes ago   Running   kube-proxy                1         8d948d74d27e4   kube-proxy-lb5pb
0e2491a7c4c6b   ba04bb24b9575   5 minutes ago   Exited    storage-provisioner       1         6232be4eb282c   storage-provisioner
b260efe057188   1611cd07b61d5   5 minutes ago   Running   busybox                   1         1229d7f7203dd   busybox
390f10b8e65da   db91994f4ee8f   5 minutes ago   Running   coredns                   1         84ec0740c3921   coredns-74ff55c5b-bgscq
399855c8c7aa0   ee75e27fff91c   5 minutes ago   Running   kindnet-cni               1         f573fb4f3a426   kindnet-mlst6
1bf1c0fe8b5c1   05b738aa1bc63   6 minutes ago   Running   etcd                      1         227c7a063cf27   etcd-old-k8s-version-807851
6fe88a900e8a7   2c08bbbc02d3a   6 minutes ago   Running   kube-apiserver            1         f3611f8f92874   kube-apiserver-old-k8s-version-807851
d6c1ce4c8da60   1df8a2b116bd1   6 minutes ago   Running   kube-controller-manager   1         ccdeecfc59f93   kube-controller-manager-old-k8s-version-807851
a0903f3834026   e7605f88f17d6   6 minutes ago   Running   kube-scheduler            1         c6cee924da1a5   kube-scheduler-old-k8s-version-807851
48827dd691b6a   1611cd07b61d5   6 minutes ago   Exited    busybox                   0         b00863e90d9cf   busybox
2f401afa89581   db91994f4ee8f   8 minutes ago   Exited    coredns                   0         33660ee12bb91   coredns-74ff55c5b-bgscq
a7daa3e08059b   ee75e27fff91c   8 minutes ago   Exited    kindnet-cni               0         18413403a210a   kindnet-mlst6
1c3560cc55d05   25a5233254979   8 minutes ago   Exited    kube-proxy                0         6482c98ac429b   kube-proxy-lb5pb
dc1a1bb7499bd   1df8a2b116bd1   8 minutes ago   Exited    kube-controller-manager   0         4cb5b01612418   kube-controller-manager-old-k8s-version-807851
5d5ef14942224   e7605f88f17d6   8 minutes ago   Exited    kube-scheduler            0         5ecea1b8c023d   kube-scheduler-old-k8s-version-807851
d60728bfd3ef1   05b738aa1bc63   8 minutes ago   Exited    etcd                      0         580c9213cdd25   etcd-old-k8s-version-807851
708e921e10598   2c08bbbc02d3a   8 minutes ago   Exited    kube-apiserver            0         85e5876bb17f8   kube-apiserver-old-k8s-version-807851
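The scraper container above sits in Exited after its fifth attempt. A hypothetical manual follow-up, reusing the same crictl calls the log collector already ran (this assumes the docker driver's node container carries the profile name):

    # Enter the node container
    docker exec -it old-k8s-version-807851 bash
    # Tail the exited scraper container; the ID prefix comes from the table above
    sudo crictl logs --tail 50 cfb4956289005
    # Check its exit code and timestamps
    sudo crictl inspect cfb4956289005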
==> containerd <==
Apr 03 18:58:48 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:58:48.583011802Z" level=info msg="received exit event container_id:\"c5b68dabfa1bd76b094a34be837d5e1736d31649ea54eddec5ba7e5710836095\" id:\"c5b68dabfa1bd76b094a34be837d5e1736d31649ea54eddec5ba7e5710836095\" pid:3050 exit_status:255 exited_at:{seconds:1743706728 nanos:581518649}"
Apr 03 18:58:48 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:58:48.583081374Z" level=info msg="StartContainer for \"c5b68dabfa1bd76b094a34be837d5e1736d31649ea54eddec5ba7e5710836095\" returns successfully"
Apr 03 18:58:48 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:58:48.608506782Z" level=info msg="shim disconnected" id=c5b68dabfa1bd76b094a34be837d5e1736d31649ea54eddec5ba7e5710836095 namespace=k8s.io
Apr 03 18:58:48 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:58:48.608688856Z" level=warning msg="cleaning up after shim disconnected" id=c5b68dabfa1bd76b094a34be837d5e1736d31649ea54eddec5ba7e5710836095 namespace=k8s.io
Apr 03 18:58:48 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:58:48.609258241Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 03 18:58:48 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:58:48.752981939Z" level=info msg="RemoveContainer for \"6ce425a70d98bed4eacfa138c877bf0fab76a1509b9616d19995b81a2292c977\""
Apr 03 18:58:48 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:58:48.766982301Z" level=info msg="RemoveContainer for \"6ce425a70d98bed4eacfa138c877bf0fab76a1509b9616d19995b81a2292c977\" returns successfully"
Apr 03 18:59:46 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:59:46.477951571Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 03 18:59:46 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:59:46.484940760Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Apr 03 18:59:46 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:59:46.487079106Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 03 18:59:46 old-k8s-version-807851 containerd[566]: time="2025-04-03T18:59:46.487106676Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Apr 03 19:00:16 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:00:16.483351650Z" level=info msg="CreateContainer within sandbox \"5689df349e0d948ba382e396d54f53b232204ba0c52ab67c06d539075545617b\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Apr 03 19:00:16 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:00:16.501630295Z" level=info msg="CreateContainer within sandbox \"5689df349e0d948ba382e396d54f53b232204ba0c52ab67c06d539075545617b\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c\""
Apr 03 19:00:16 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:00:16.504908026Z" level=info msg="StartContainer for \"cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c\""
Apr 03 19:00:16 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:00:16.572740357Z" level=info msg="StartContainer for \"cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c\" returns successfully"
Apr 03 19:00:16 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:00:16.572897996Z" level=info msg="received exit event container_id:\"cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c\" id:\"cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c\" pid:3304 exit_status:255 exited_at:{seconds:1743706816 nanos:571068396}"
Apr 03 19:00:16 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:00:16.598952531Z" level=info msg="shim disconnected" id=cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c namespace=k8s.io
Apr 03 19:00:16 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:00:16.599095491Z" level=warning msg="cleaning up after shim disconnected" id=cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c namespace=k8s.io
Apr 03 19:00:16 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:00:16.599137707Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 03 19:00:16 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:00:16.986693888Z" level=info msg="RemoveContainer for \"c5b68dabfa1bd76b094a34be837d5e1736d31649ea54eddec5ba7e5710836095\""
Apr 03 19:00:16 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:00:16.993992193Z" level=info msg="RemoveContainer for \"c5b68dabfa1bd76b094a34be837d5e1736d31649ea54eddec5ba7e5710836095\" returns successfully"
Apr 03 19:02:30 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:02:30.507060121Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 03 19:02:30 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:02:30.524881563Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Apr 03 19:02:30 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:02:30.526694679Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 03 19:02:30 old-k8s-version-807851 containerd[566]: time="2025-04-03T19:02:30.526801470Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
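The containerd entries above show the metrics-server image pull failing at DNS resolution of fake.domain, before any registry traffic happens. A hypothetical pair of checks from inside the node to confirm that reading (the resolver address 192.168.76.1 is the one containerd reports in the error):

    # Resolution of the registry host fails against the same resolver
    nslookup fake.domain 192.168.76.1
    # Reproduces the exact PullImage error recorded in the log
    sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4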
==> coredns [2f401afa89581ecff8f372e0fe9cfd135b74794a29186b1c5d84eadc6e893f36] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:53671 - 32493 "HINFO IN 1956971055059266273.1537606355675332317. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015062503s
==> coredns [390f10b8e65da7a909a7abc5ed6eb7d6ef4d625d54e201f048475acafb602fe9] <==
I0403 18:57:24.469708 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-03 18:56:54.469019076 +0000 UTC m=+0.078272696) (total time: 30.000539353s):
Trace[2019727887]: [30.000539353s] [30.000539353s] END
E0403 18:57:24.469839 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0403 18:57:24.470042 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-03 18:56:54.46952716 +0000 UTC m=+0.078780788) (total time: 30.000492419s):
Trace[939984059]: [30.000492419s] [30.000492419s] END
E0403 18:57:24.470159 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0403 18:57:24.474341 1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-03 18:56:54.473934233 +0000 UTC m=+0.083187852) (total time: 30.000385875s):
Trace[1474941318]: [30.000385875s] [30.000385875s] END
E0403 18:57:24.474358 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:44479 - 4605 "HINFO IN 9200664047776171789.7839305818204289981. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026779918s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
==> describe nodes <==
Name:               old-k8s-version-807851
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=old-k8s-version-807851
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053
                    minikube.k8s.io/name=old-k8s-version-807851
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2025_04_03T18_53_54_0700
                    minikube.k8s.io/version=v1.35.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 03 Apr 2025 18:53:50 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  old-k8s-version-807851
  AcquireTime:     <unset>
  RenewTime:       Thu, 03 Apr 2025 19:02:42 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----             ------  -----------------                 ------------------                ------                      -------
  MemoryPressure   False   Thu, 03 Apr 2025 19:02:42 +0000   Thu, 03 Apr 2025 18:53:44 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Thu, 03 Apr 2025 19:02:42 +0000   Thu, 03 Apr 2025 18:53:44 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Thu, 03 Apr 2025 19:02:42 +0000   Thu, 03 Apr 2025 18:53:44 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            True    Thu, 03 Apr 2025 19:02:42 +0000   Thu, 03 Apr 2025 18:54:10 +0000   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.76.2
  Hostname:    old-k8s-version-807851
Capacity:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022296Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022296Ki
  pods:               110
System Info:
  Machine ID:                 d22a6cf5ef824109a65ae2ca151ee170
  System UUID:                432329f3-16bc-4835-94ae-8c6705a20e64
  Boot ID:                    eb4e137f-b7a4-4377-9395-0d320024a45c
  Kernel Version:             5.15.0-1081-aws
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.7.27
  Kubelet Version:            v1.20.0
  Kube-Proxy Version:         v1.20.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (12 in total)
  Namespace             Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------             ----                                            ------------  ----------  ---------------  -------------  ---
  default               busybox                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
  kube-system           coredns-74ff55c5b-bgscq                         100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m33s
  kube-system           etcd-old-k8s-version-807851                     100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m41s
  kube-system           kindnet-mlst6                                   100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m33s
  kube-system           kube-apiserver-old-k8s-version-807851           250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m41s
  kube-system           kube-controller-manager-old-k8s-version-807851  200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m41s
  kube-system           kube-proxy-lb5pb                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
  kube-system           kube-scheduler-old-k8s-version-807851           100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m41s
  kube-system           metrics-server-9975d5f86-xfpl4                  100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m30s
  kube-system           storage-provisioner                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
  kubernetes-dashboard  dashboard-metrics-scraper-8d5bb5db8-67bn4       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
  kubernetes-dashboard  kubernetes-dashboard-cd95d586-nzlf6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                950m (47%)   100m (5%)
  memory             420Mi (5%)   220Mi (2%)
  ephemeral-storage  100Mi (0%)   0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
  hugepages-32Mi     0 (0%)       0 (0%)
  hugepages-64Ki     0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                  From        Message
  ----    ------                   ----                 ----        -------
  Normal  Starting                 9m                   kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  9m (x5 over 9m)      kubelet     Node old-k8s-version-807851 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    9m (x5 over 9m)      kubelet     Node old-k8s-version-807851 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     9m (x4 over 9m)      kubelet     Node old-k8s-version-807851 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  9m                   kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 8m41s                kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  8m41s                kubelet     Node old-k8s-version-807851 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m41s                kubelet     Node old-k8s-version-807851 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m41s                kubelet     Node old-k8s-version-807851 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m41s                kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                8m33s                kubelet     Node old-k8s-version-807851 status is now: NodeReady
  Normal  Starting                 8m31s                kube-proxy  Starting kube-proxy.
  Normal  Starting                 6m3s                 kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  6m3s (x7 over 6m3s)  kubelet     Node old-k8s-version-807851 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)  kubelet     Node old-k8s-version-807851 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m3s (x8 over 6m3s)  kubelet     Node old-k8s-version-807851 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  6m3s                 kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m48s                kube-proxy  Starting kube-proxy.
==> dmesg <==
[Apr 3 17:09] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
[ +0.324806] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
==> etcd [1bf1c0fe8b5c13deec5d99baf2b283b803f8c80c1245740e4631120bb0b714c0] <==
2025-04-03 18:58:43.422897 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:58:53.422763 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:59:03.422866 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:59:13.422864 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:59:23.422878 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:59:33.422962 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:59:43.422910 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:59:53.422787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:00:03.422884 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:00:13.422828 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:00:23.422925 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:00:33.422989 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:00:43.422878 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:00:53.423284 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:01:03.422880 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:01:13.422892 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:01:23.422887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:01:33.422746 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:01:43.422761 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:01:53.422992 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:02:03.422906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:02:13.422902 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:02:23.422969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:02:33.422883 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 19:02:43.422849 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [d60728bfd3ef1ade3e667ae6739692cc69300d8a8c5afc48fa0ddc4e78462bd2] <==
raft2025/04/03 18:53:44 INFO: ea7e25599daad906 became candidate at term 2
raft2025/04/03 18:53:44 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2025/04/03 18:53:44 INFO: ea7e25599daad906 became leader at term 2
raft2025/04/03 18:53:44 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-04-03 18:53:44.790288 I | etcdserver: setting up the initial cluster version to 3.4
2025-04-03 18:53:44.796130 N | etcdserver/membership: set the initial cluster version to 3.4
2025-04-03 18:53:44.796356 I | etcdserver/api: enabled capabilities for version 3.4
2025-04-03 18:53:44.796466 I | etcdserver: published {Name:old-k8s-version-807851 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-04-03 18:53:44.796721 I | embed: ready to serve client requests
2025-04-03 18:53:44.800515 I | embed: serving client requests on 192.168.76.2:2379
2025-04-03 18:53:44.807209 I | embed: ready to serve client requests
2025-04-03 18:53:44.810043 I | embed: serving client requests on 127.0.0.1:2379
2025-04-03 18:54:07.674746 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:54:17.367217 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:54:27.367179 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:54:37.367227 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:54:47.367174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:54:57.367150 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:55:07.367182 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:55:17.367114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:55:27.367320 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:55:37.367372 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:55:47.367291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:55:57.367131 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-03 18:56:07.367212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
19:02:43 up 2:45, 0 users, load average: 2.05, 2.45, 2.75
Linux old-k8s-version-807851 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [399855c8c7aa0de89e52e3121e9296667472d8129229a14b0e33c2706ecf32b7] <==
I0403 19:00:34.460472 1 main.go:301] handling current node
I0403 19:00:44.464047 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:00:44.464086 1 main.go:301] handling current node
I0403 19:00:54.456352 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:00:54.456400 1 main.go:301] handling current node
I0403 19:01:04.462827 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:01:04.462864 1 main.go:301] handling current node
I0403 19:01:14.465732 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:01:14.465782 1 main.go:301] handling current node
I0403 19:01:24.465765 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:01:24.465800 1 main.go:301] handling current node
I0403 19:01:34.462552 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:01:34.462593 1 main.go:301] handling current node
I0403 19:01:44.461748 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:01:44.461868 1 main.go:301] handling current node
I0403 19:01:54.462074 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:01:54.462111 1 main.go:301] handling current node
I0403 19:02:04.462970 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:02:04.463008 1 main.go:301] handling current node
I0403 19:02:14.465754 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:02:14.465792 1 main.go:301] handling current node
I0403 19:02:24.465718 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:02:24.465810 1 main.go:301] handling current node
I0403 19:02:34.463086 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 19:02:34.463181 1 main.go:301] handling current node
==> kindnet [a7daa3e08059b82cce07ecfd3d8b927a8e8cbca240b78693c0304daf66829cb4] <==
I0403 18:54:14.527252 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0403 18:54:14.527280 1 metrics.go:61] Registering metrics
I0403 18:54:14.527529 1 controller.go:401] Syncing nftables rules
I0403 18:54:24.330291 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:54:24.330341 1 main.go:301] handling current node
I0403 18:54:34.327934 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:54:34.327974 1 main.go:301] handling current node
I0403 18:54:44.334166 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:54:44.334205 1 main.go:301] handling current node
I0403 18:54:54.335496 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:54:54.335531 1 main.go:301] handling current node
I0403 18:55:04.335447 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:55:04.335482 1 main.go:301] handling current node
I0403 18:55:14.326495 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:55:14.326702 1 main.go:301] handling current node
I0403 18:55:24.327391 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:55:24.327600 1 main.go:301] handling current node
I0403 18:55:34.329715 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:55:34.330028 1 main.go:301] handling current node
I0403 18:55:44.334691 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:55:44.334732 1 main.go:301] handling current node
I0403 18:55:54.333726 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:55:54.333762 1 main.go:301] handling current node
I0403 18:56:04.327050 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0403 18:56:04.327083 1 main.go:301] handling current node
==> kube-apiserver [6fe88a900e8a71de95d253f962f9583843d6eded60e0f18c5b8b26e562539f3a] <==
I0403 18:59:28.870130 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0403 18:59:28.870139 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0403 18:59:54.354498 1 handler_proxy.go:102] no RequestInfo found in the context
E0403 18:59:54.354718 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0403 18:59:54.354807 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0403 19:00:07.212207 1 client.go:360] parsed scheme: "passthrough"
I0403 19:00:07.212470 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0403 19:00:07.212602 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0403 19:00:43.822010 1 client.go:360] parsed scheme: "passthrough"
I0403 19:00:43.822055 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0403 19:00:43.822064 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0403 19:01:17.005193 1 client.go:360] parsed scheme: "passthrough"
I0403 19:01:17.005240 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0403 19:01:17.005250 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0403 19:01:51.196733 1 handler_proxy.go:102] no RequestInfo found in the context
E0403 19:01:51.197060 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0403 19:01:51.197187 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0403 19:01:54.920557 1 client.go:360] parsed scheme: "passthrough"
I0403 19:01:54.920602 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0403 19:01:54.920610 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0403 19:02:34.792967 1 client.go:360] parsed scheme: "passthrough"
I0403 19:02:34.793009 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0403 19:02:34.793018 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [708e921e1059836f9392592d14d347215e68b5804c47b9ef37e15eee871ec0cf] <==
I0403 18:53:51.488949 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0403 18:53:51.504133 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0403 18:53:51.513394 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0403 18:53:51.513418 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0403 18:53:52.011587 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0403 18:53:52.054330 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0403 18:53:52.114069 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0403 18:53:52.115411 1 controller.go:606] quota admission added evaluator for: endpoints
I0403 18:53:52.121704 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0403 18:53:53.197159 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0403 18:53:53.806749 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0403 18:53:53.898674 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0403 18:54:02.305266 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0403 18:54:10.532584 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0403 18:54:10.594292 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0403 18:54:22.621628 1 client.go:360] parsed scheme: "passthrough"
I0403 18:54:22.621697 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0403 18:54:22.621707 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0403 18:54:58.731042 1 client.go:360] parsed scheme: "passthrough"
I0403 18:54:58.731087 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0403 18:54:58.731096 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0403 18:55:36.403212 1 client.go:360] parsed scheme: "passthrough"
I0403 18:55:36.403257 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0403 18:55:36.403267 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0403 18:56:11.650166 1 upgradeaware.go:387] Error proxying data from backend to client: write tcp 192.168.76.2:8443->192.168.76.1:57462: write: broken pipe
==> kube-controller-manager [d6c1ce4c8da6014409134378b89e30ec643bf52b03b47758de1f1e5bcdd2403e] <==
W0403 18:58:16.858282 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0403 18:58:42.173989 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0403 18:58:48.508878 1 request.go:655] Throttling request took 1.048466898s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W0403 18:58:49.360331 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0403 18:59:12.676026 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0403 18:59:21.011301 1 request.go:655] Throttling request took 1.048712559s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0403 18:59:21.862385 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0403 18:59:43.178046 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0403 18:59:53.512794 1 request.go:655] Throttling request took 1.048238664s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0403 18:59:54.364339 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0403 19:00:13.679752 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0403 19:00:26.014878 1 request.go:655] Throttling request took 1.048403743s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0403 19:00:26.866423 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0403 19:00:44.181546 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0403 19:00:58.516998 1 request.go:655] Throttling request took 1.048499461s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0403 19:00:59.368565 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0403 19:01:14.683283 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0403 19:01:31.019551 1 request.go:655] Throttling request took 1.048442795s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0403 19:01:31.870932 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0403 19:01:45.186046 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0403 19:02:03.521346 1 request.go:655] Throttling request took 1.016083186s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0403 19:02:04.374083 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0403 19:02:15.688489 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0403 19:02:36.024723 1 request.go:655] Throttling request took 1.0484443s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0403 19:02:36.876057 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-controller-manager [dc1a1bb7499bdde377a6867ef7a05c6823bcfaf801a2d14ff44f0be6946d6426] <==
I0403 18:54:10.554960 1 shared_informer.go:247] Caches are synced for persistent volume
I0403 18:54:10.566169 1 range_allocator.go:373] Set node old-k8s-version-807851 PodCIDR to [10.244.0.0/24]
I0403 18:54:10.581413 1 shared_informer.go:247] Caches are synced for deployment
I0403 18:54:10.603071 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lb5pb"
I0403 18:54:10.658117 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I0403 18:54:10.677950 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-b4j6x"
I0403 18:54:10.679900 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0403 18:54:10.734396 1 shared_informer.go:247] Caches are synced for disruption
I0403 18:54:10.734418 1 disruption.go:339] Sending events to api server.
I0403 18:54:10.740463 1 shared_informer.go:247] Caches are synced for resource quota
I0403 18:54:10.740566 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mlst6"
I0403 18:54:10.740588 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-bgscq"
I0403 18:54:10.781477 1 shared_informer.go:247] Caches are synced for resource quota
I0403 18:54:10.896356 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E0403 18:54:10.921259 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0403 18:54:11.178756 1 shared_informer.go:247] Caches are synced for garbage collector
I0403 18:54:11.178786 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0403 18:54:11.196565 1 shared_informer.go:247] Caches are synced for garbage collector
I0403 18:54:12.016843 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0403 18:54:12.075547 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-b4j6x"
I0403 18:54:15.482077 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0403 18:56:12.393742 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E0403 18:56:12.503861 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0403 18:56:12.515421 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0403 18:56:12.548387 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
==> kube-proxy [1c3560cc55d0592ed02b6cb0c2f83e671d042efe23f092987978bd49348c24c8] <==
I0403 18:54:12.439536 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0403 18:54:12.439641 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0403 18:54:12.485335 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0403 18:54:12.485459 1 server_others.go:185] Using iptables Proxier.
I0403 18:54:12.485954 1 server.go:650] Version: v1.20.0
I0403 18:54:12.487131 1 config.go:315] Starting service config controller
I0403 18:54:12.487140 1 shared_informer.go:240] Waiting for caches to sync for service config
I0403 18:54:12.487203 1 config.go:224] Starting endpoint slice config controller
I0403 18:54:12.488326 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0403 18:54:12.589366 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0403 18:54:12.589424 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [34a6a3672a668480addd904b75370e289fe0b9dacc618aaf601371ad01bab90c] <==
I0403 18:56:54.955130 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0403 18:56:54.955417 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0403 18:56:55.001044 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0403 18:56:55.001387 1 server_others.go:185] Using iptables Proxier.
I0403 18:56:55.001842 1 server.go:650] Version: v1.20.0
I0403 18:56:55.002822 1 config.go:315] Starting service config controller
I0403 18:56:55.003017 1 shared_informer.go:240] Waiting for caches to sync for service config
I0403 18:56:55.003146 1 config.go:224] Starting endpoint slice config controller
I0403 18:56:55.003229 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0403 18:56:55.103264 1 shared_informer.go:247] Caches are synced for service config
I0403 18:56:55.103504 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [5d5ef149422241aab6c0caa220bb68a784a040820d94cb28635e3dfb2da0abdf] <==
W0403 18:53:50.685443 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0403 18:53:50.685573 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0403 18:53:50.685669 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0403 18:53:50.755399 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0403 18:53:50.757662 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0403 18:53:50.757846 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0403 18:53:50.758047 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0403 18:53:50.766475 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0403 18:53:50.767971 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0403 18:53:50.768234 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0403 18:53:50.768440 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0403 18:53:50.768649 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0403 18:53:50.768959 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0403 18:53:50.769244 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0403 18:53:50.769482 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0403 18:53:50.771106 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0403 18:53:50.772460 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0403 18:53:50.772709 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0403 18:53:50.773735 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0403 18:53:51.613970 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0403 18:53:51.660839 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0403 18:53:51.694072 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0403 18:53:51.709627 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0403 18:53:51.843381 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0403 18:53:52.459801 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [a0903f3834026afab3374d57c5cf6a1b95a62b14668d4bdf9e9b3ee531153c33] <==
I0403 18:56:44.568133 1 serving.go:331] Generated self-signed cert in-memory
W0403 18:56:49.981705 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0403 18:56:49.981738 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0403 18:56:49.981747 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0403 18:56:49.981753 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0403 18:56:50.290330 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0403 18:56:50.290716 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0403 18:56:50.290811 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0403 18:56:50.290902 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0403 18:56:50.598552 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Apr 03 19:01:13 old-k8s-version-807851 kubelet[655]: E0403 19:01:13.477232 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 03 19:01:19 old-k8s-version-807851 kubelet[655]: I0403 19:01:19.476574 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c
Apr 03 19:01:19 old-k8s-version-807851 kubelet[655]: E0403 19:01:19.476987 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
Apr 03 19:01:24 old-k8s-version-807851 kubelet[655]: E0403 19:01:24.477484 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 03 19:01:33 old-k8s-version-807851 kubelet[655]: I0403 19:01:33.476547 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c
Apr 03 19:01:33 old-k8s-version-807851 kubelet[655]: E0403 19:01:33.477359 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
Apr 03 19:01:39 old-k8s-version-807851 kubelet[655]: E0403 19:01:39.477333 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 03 19:01:47 old-k8s-version-807851 kubelet[655]: I0403 19:01:47.476510 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c
Apr 03 19:01:47 old-k8s-version-807851 kubelet[655]: E0403 19:01:47.476853 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
Apr 03 19:01:53 old-k8s-version-807851 kubelet[655]: E0403 19:01:53.477144 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 03 19:01:58 old-k8s-version-807851 kubelet[655]: I0403 19:01:58.476700 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c
Apr 03 19:01:58 old-k8s-version-807851 kubelet[655]: E0403 19:01:58.477092 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
Apr 03 19:02:04 old-k8s-version-807851 kubelet[655]: E0403 19:02:04.486011 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 03 19:02:13 old-k8s-version-807851 kubelet[655]: I0403 19:02:13.476630 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c
Apr 03 19:02:13 old-k8s-version-807851 kubelet[655]: E0403 19:02:13.476993 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
Apr 03 19:02:17 old-k8s-version-807851 kubelet[655]: E0403 19:02:17.477249 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 03 19:02:27 old-k8s-version-807851 kubelet[655]: I0403 19:02:27.476490 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c
Apr 03 19:02:27 old-k8s-version-807851 kubelet[655]: E0403 19:02:27.478282 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
Apr 03 19:02:30 old-k8s-version-807851 kubelet[655]: E0403 19:02:30.527096 655 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 03 19:02:30 old-k8s-version-807851 kubelet[655]: E0403 19:02:30.527144 655 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 03 19:02:30 old-k8s-version-807851 kubelet[655]: E0403 19:02:30.527280 655 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-ww5xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 03 19:02:30 old-k8s-version-807851 kubelet[655]: E0403 19:02:30.527317 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 03 19:02:38 old-k8s-version-807851 kubelet[655]: I0403 19:02:38.476667 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfb4956289005c7626398f1a0e0d8f21936cd19d536393674a231b68d865d62c
Apr 03 19:02:38 old-k8s-version-807851 kubelet[655]: E0403 19:02:38.477610 655 pod_workers.go:191] Error syncing pod 37e87c0f-e530-4ba7-834d-a890b3035993 ("dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-67bn4_kubernetes-dashboard(37e87c0f-e530-4ba7-834d-a890b3035993)"
Apr 03 19:02:42 old-k8s-version-807851 kubelet[655]: E0403 19:02:42.486657 655 pod_workers.go:191] Error syncing pod e395c872-8141-4a32-a772-c988db1fc20f ("metrics-server-9975d5f86-xfpl4_kube-system(e395c872-8141-4a32-a772-c988db1fc20f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
==> kubernetes-dashboard [982b06663d5a2fb5a5066e3f3f5486a7e0dfb589a0a725d4ddc420fb308ad36c] <==
2025/04/03 18:57:14 Starting overwatch
2025/04/03 18:57:14 Using namespace: kubernetes-dashboard
2025/04/03 18:57:14 Using in-cluster config to connect to apiserver
2025/04/03 18:57:14 Using secret token for csrf signing
2025/04/03 18:57:14 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/04/03 18:57:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/04/03 18:57:15 Successful initial request to the apiserver, version: v1.20.0
2025/04/03 18:57:15 Generating JWE encryption key
2025/04/03 18:57:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/04/03 18:57:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/04/03 18:57:16 Initializing JWE encryption key from synchronized object
2025/04/03 18:57:16 Creating in-cluster Sidecar client
2025/04/03 18:57:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/03 18:57:16 Serving insecurely on HTTP port: 9090
2025/04/03 18:57:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/03 18:58:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/03 18:58:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/03 18:59:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/03 18:59:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/03 19:00:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/03 19:00:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/03 19:01:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/03 19:01:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/03 19:02:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [0e2491a7c4c6b20af3e0399d98d979627e5ff8eec843196546e24f996652a16e] <==
I0403 18:56:54.623497 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0403 18:57:24.625291 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [110f13e725159ed993fc9664e2c1c9f2d57f0396dbc07d4b582a7760f9ca371c] <==
I0403 18:57:38.585792 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0403 18:57:38.621736 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0403 18:57:38.621789 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0403 18:57:56.098504 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0403 18:57:56.098887 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-807851_96fb28fb-a28a-462f-a521-c036171783bb!
I0403 18:57:56.099651 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2860c773-da83-459b-ac83-d7bc463857d7", APIVersion:"v1", ResourceVersion:"869", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-807851_96fb28fb-a28a-462f-a521-c036171783bb became leader
I0403 18:57:56.199283 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-807851_96fb28fb-a28a-462f-a521-c036171783bb!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-807851 -n old-k8s-version-807851
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-807851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-xfpl4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-807851 describe pod metrics-server-9975d5f86-xfpl4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-807851 describe pod metrics-server-9975d5f86-xfpl4: exit status 1 (132.238736ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-xfpl4" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-807851 describe pod metrics-server-9975d5f86-xfpl4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.75s)