=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-145659 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E0120 17:46:43.279884 7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:47:01.700159 7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-145659 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m20.326132357s)
-- stdout --
* [old-k8s-version-145659] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20109
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
* Using the docker driver based on existing profile
* Starting "old-k8s-version-145659" primary control-plane node in "old-k8s-version-145659" cluster
* Pulling base image v0.0.46 ...
* Restarting existing docker container for "old-k8s-version-145659" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
* Verifying Kubernetes components...
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-145659 addons enable metrics-server
* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
-- /stdout --
** stderr **
I0120 17:46:05.924027 216535 out.go:345] Setting OutFile to fd 1 ...
I0120 17:46:05.924133 216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:46:05.924138 216535 out.go:358] Setting ErrFile to fd 2...
I0120 17:46:05.924143 216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:46:05.927074 216535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
I0120 17:46:05.927700 216535 out.go:352] Setting JSON to false
I0120 17:46:05.929358 216535 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5310,"bootTime":1737389856,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0120 17:46:05.929494 216535 start.go:139] virtualization:
I0120 17:46:05.934415 216535 out.go:177] * [old-k8s-version-145659] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0120 17:46:05.937769 216535 out.go:177] - MINIKUBE_LOCATION=20109
I0120 17:46:05.937939 216535 notify.go:220] Checking for updates...
I0120 17:46:05.943752 216535 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 17:46:05.946745 216535 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
I0120 17:46:05.949651 216535 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
I0120 17:46:05.953188 216535 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0120 17:46:05.956057 216535 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 17:46:05.959519 216535 config.go:182] Loaded profile config "old-k8s-version-145659": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0120 17:46:05.962887 216535 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
I0120 17:46:05.965681 216535 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 17:46:06.002743 216535 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
I0120 17:46:06.002881 216535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 17:46:06.093668 216535 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 17:46:06.084132487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 17:46:06.093786 216535 docker.go:318] overlay module found
I0120 17:46:06.096967 216535 out.go:177] * Using the docker driver based on existing profile
I0120 17:46:06.099823 216535 start.go:297] selected driver: docker
I0120 17:46:06.099846 216535 start.go:901] validating driver "docker" against &{Name:old-k8s-version-145659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 17:46:06.099966 216535 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 17:46:06.100696 216535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 17:46:06.206608 216535 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 17:46:06.19724018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 17:46:06.207017 216535 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 17:46:06.207037 216535 cni.go:84] Creating CNI manager for ""
I0120 17:46:06.207072 216535 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 17:46:06.207105 216535 start.go:340] cluster config:
{Name:old-k8s-version-145659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 17:46:06.210546 216535 out.go:177] * Starting "old-k8s-version-145659" primary control-plane node in "old-k8s-version-145659" cluster
I0120 17:46:06.213362 216535 cache.go:121] Beginning downloading kic base image for docker with containerd
I0120 17:46:06.216265 216535 out.go:177] * Pulling base image v0.0.46 ...
I0120 17:46:06.219148 216535 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 17:46:06.219203 216535 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0120 17:46:06.219212 216535 cache.go:56] Caching tarball of preloaded images
I0120 17:46:06.219305 216535 preload.go:172] Found /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0120 17:46:06.219313 216535 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0120 17:46:06.219460 216535 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/config.json ...
I0120 17:46:06.219694 216535 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0120 17:46:06.245850 216535 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0120 17:46:06.245871 216535 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0120 17:46:06.245884 216535 cache.go:227] Successfully downloaded all kic artifacts
I0120 17:46:06.245915 216535 start.go:360] acquireMachinesLock for old-k8s-version-145659: {Name:mkc018e598a91196e1dc19a35c434f89ff9fd55d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 17:46:06.245970 216535 start.go:364] duration metric: took 35.25µs to acquireMachinesLock for "old-k8s-version-145659"
I0120 17:46:06.245988 216535 start.go:96] Skipping create...Using existing machine configuration
I0120 17:46:06.245993 216535 fix.go:54] fixHost starting:
I0120 17:46:06.246246 216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
I0120 17:46:06.275678 216535 fix.go:112] recreateIfNeeded on old-k8s-version-145659: state=Stopped err=<nil>
W0120 17:46:06.275704 216535 fix.go:138] unexpected machine state, will restart: <nil>
I0120 17:46:06.279302 216535 out.go:177] * Restarting existing docker container for "old-k8s-version-145659" ...
I0120 17:46:06.283509 216535 cli_runner.go:164] Run: docker start old-k8s-version-145659
I0120 17:46:06.662660 216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
I0120 17:46:06.687634 216535 kic.go:430] container "old-k8s-version-145659" state is running.
I0120 17:46:06.688026 216535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145659
I0120 17:46:06.718777 216535 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/config.json ...
I0120 17:46:06.718997 216535 machine.go:93] provisionDockerMachine start ...
I0120 17:46:06.719060 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:06.754965 216535 main.go:141] libmachine: Using SSH client type: native
I0120 17:46:06.755236 216535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I0120 17:46:06.755252 216535 main.go:141] libmachine: About to run SSH command:
hostname
I0120 17:46:06.755979 216535 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0120 17:46:09.879374 216535 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-145659
I0120 17:46:09.879396 216535 ubuntu.go:169] provisioning hostname "old-k8s-version-145659"
I0120 17:46:09.879468 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:09.923478 216535 main.go:141] libmachine: Using SSH client type: native
I0120 17:46:09.923728 216535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I0120 17:46:09.923739 216535 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-145659 && echo "old-k8s-version-145659" | sudo tee /etc/hostname
I0120 17:46:10.068627 216535 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-145659
I0120 17:46:10.068717 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:10.092361 216535 main.go:141] libmachine: Using SSH client type: native
I0120 17:46:10.092623 216535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I0120 17:46:10.092647 216535 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-145659' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-145659/g' /etc/hosts;
	else
		echo '127.0.1.1 old-k8s-version-145659' | sudo tee -a /etc/hosts;
	fi
fi
I0120 17:46:10.220671 216535 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 17:46:10.220708 216535 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2518/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2518/.minikube}
I0120 17:46:10.220739 216535 ubuntu.go:177] setting up certificates
I0120 17:46:10.220749 216535 provision.go:84] configureAuth start
I0120 17:46:10.220837 216535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145659
I0120 17:46:10.247201 216535 provision.go:143] copyHostCerts
I0120 17:46:10.247292 216535 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem, removing ...
I0120 17:46:10.247306 216535 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem
I0120 17:46:10.247404 216535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem (1679 bytes)
I0120 17:46:10.247548 216535 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem, removing ...
I0120 17:46:10.247559 216535 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem
I0120 17:46:10.247591 216535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem (1082 bytes)
I0120 17:46:10.247685 216535 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem, removing ...
I0120 17:46:10.247696 216535 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem
I0120 17:46:10.247731 216535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem (1123 bytes)
I0120 17:46:10.247806 216535 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-145659 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-145659]
I0120 17:46:11.195154 216535 provision.go:177] copyRemoteCerts
I0120 17:46:11.195332 216535 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 17:46:11.205365 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:11.226258 216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
I0120 17:46:11.317569 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0120 17:46:11.343192 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0120 17:46:11.369292 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0120 17:46:11.395976 216535 provision.go:87] duration metric: took 1.175208097s to configureAuth
I0120 17:46:11.396054 216535 ubuntu.go:193] setting minikube options for container-runtime
I0120 17:46:11.396300 216535 config.go:182] Loaded profile config "old-k8s-version-145659": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0120 17:46:11.396329 216535 machine.go:96] duration metric: took 4.67731771s to provisionDockerMachine
I0120 17:46:11.396364 216535 start.go:293] postStartSetup for "old-k8s-version-145659" (driver="docker")
I0120 17:46:11.396393 216535 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 17:46:11.396476 216535 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 17:46:11.396543 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:11.418332 216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
I0120 17:46:11.509234 216535 ssh_runner.go:195] Run: cat /etc/os-release
I0120 17:46:11.513156 216535 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0120 17:46:11.513190 216535 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0120 17:46:11.513202 216535 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0120 17:46:11.513208 216535 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0120 17:46:11.513218 216535 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2518/.minikube/addons for local assets ...
I0120 17:46:11.513277 216535 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2518/.minikube/files for local assets ...
I0120 17:46:11.513353 216535 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem -> 78442.pem in /etc/ssl/certs
I0120 17:46:11.513451 216535 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 17:46:11.522719 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem --> /etc/ssl/certs/78442.pem (1708 bytes)
I0120 17:46:11.549248 216535 start.go:296] duration metric: took 152.851001ms for postStartSetup
I0120 17:46:11.549396 216535 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0120 17:46:11.549456 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:11.569060 216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
I0120 17:46:11.656223 216535 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0120 17:46:11.666597 216535 fix.go:56] duration metric: took 5.42059619s for fixHost
I0120 17:46:11.666633 216535 start.go:83] releasing machines lock for "old-k8s-version-145659", held for 5.420653897s
I0120 17:46:11.666702 216535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145659
I0120 17:46:11.692738 216535 ssh_runner.go:195] Run: cat /version.json
I0120 17:46:11.692789 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:11.692844 216535 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 17:46:11.692925 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:11.723463 216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
I0120 17:46:11.731314 216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
I0120 17:46:11.831454 216535 ssh_runner.go:195] Run: systemctl --version
I0120 17:46:11.976016 216535 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0120 17:46:11.980625 216535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0120 17:46:12.004291 216535 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0120 17:46:12.004452 216535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 17:46:12.015676 216535 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0120 17:46:12.015741 216535 start.go:495] detecting cgroup driver to use...
I0120 17:46:12.015788 216535 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0120 17:46:12.015865 216535 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 17:46:12.032837 216535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 17:46:12.047395 216535 docker.go:217] disabling cri-docker service (if available) ...
I0120 17:46:12.047542 216535 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 17:46:12.063598 216535 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 17:46:12.077573 216535 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 17:46:12.189191 216535 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 17:46:12.299654 216535 docker.go:233] disabling docker service ...
I0120 17:46:12.299769 216535 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 17:46:12.314439 216535 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 17:46:12.326906 216535 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 17:46:12.434394 216535 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 17:46:12.553091 216535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 17:46:12.568529 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 17:46:12.586525 216535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0120 17:46:12.596704 216535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 17:46:12.606713 216535 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 17:46:12.606832 216535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 17:46:12.616823 216535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 17:46:12.626682 216535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 17:46:12.636497 216535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 17:46:12.646523 216535 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 17:46:12.656022 216535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 17:46:12.666080 216535 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 17:46:12.675830 216535 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 17:46:12.684732 216535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 17:46:12.791875 216535 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 17:46:12.998071 216535 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0120 17:46:12.998189 216535 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 17:46:13.002732 216535 start.go:563] Will wait 60s for crictl version
I0120 17:46:13.002876 216535 ssh_runner.go:195] Run: which crictl
I0120 17:46:13.007296 216535 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 17:46:13.066060 216535 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.24
RuntimeApiVersion: v1
I0120 17:46:13.066173 216535 ssh_runner.go:195] Run: containerd --version
I0120 17:46:13.087981 216535 ssh_runner.go:195] Run: containerd --version
I0120 17:46:13.115409 216535 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
I0120 17:46:13.118792 216535 cli_runner.go:164] Run: docker network inspect old-k8s-version-145659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 17:46:13.141085 216535 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0120 17:46:13.147849 216535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 17:46:13.160422 216535 kubeadm.go:883] updating cluster {Name:old-k8s-version-145659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 17:46:13.160554 216535 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 17:46:13.160612 216535 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 17:46:13.214208 216535 containerd.go:627] all images are preloaded for containerd runtime.
I0120 17:46:13.214232 216535 containerd.go:534] Images already preloaded, skipping extraction
I0120 17:46:13.214289 216535 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 17:46:13.261217 216535 containerd.go:627] all images are preloaded for containerd runtime.
I0120 17:46:13.261241 216535 cache_images.go:84] Images are preloaded, skipping loading
I0120 17:46:13.261249 216535 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0120 17:46:13.261359 216535 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-145659 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0120 17:46:13.261430 216535 ssh_runner.go:195] Run: sudo crictl info
I0120 17:46:13.313120 216535 cni.go:84] Creating CNI manager for ""
I0120 17:46:13.313148 216535 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 17:46:13.313159 216535 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 17:46:13.313179 216535 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-145659 NodeName:old-k8s-version-145659 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0120 17:46:13.313307 216535 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "old-k8s-version-145659"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0120 17:46:13.313377 216535 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0120 17:46:13.323129 216535 binaries.go:44] Found k8s binaries, skipping transfer
I0120 17:46:13.323196 216535 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 17:46:13.332349 216535 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0120 17:46:13.351092 216535 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 17:46:13.370101 216535 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0120 17:46:13.389638 216535 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0120 17:46:13.393251 216535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 17:46:13.405001 216535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 17:46:13.518817 216535 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 17:46:13.535456 216535 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659 for IP: 192.168.76.2
I0120 17:46:13.535481 216535 certs.go:194] generating shared ca certs ...
I0120 17:46:13.535499 216535 certs.go:226] acquiring lock for ca certs: {Name:mk409d9cbe30328f0e66b0d712629bd4b02b995b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 17:46:13.535636 216535 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2518/.minikube/ca.key
I0120 17:46:13.535683 216535 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.key
I0120 17:46:13.535696 216535 certs.go:256] generating profile certs ...
I0120 17:46:13.535789 216535 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.key
I0120 17:46:13.535859 216535 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/apiserver.key.4fd2295c
I0120 17:46:13.535906 216535 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/proxy-client.key
I0120 17:46:13.536030 216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844.pem (1338 bytes)
W0120 17:46:13.536064 216535 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844_empty.pem, impossibly tiny 0 bytes
I0120 17:46:13.536077 216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem (1679 bytes)
I0120 17:46:13.536101 216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem (1082 bytes)
I0120 17:46:13.536127 216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem (1123 bytes)
I0120 17:46:13.536153 216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem (1679 bytes)
I0120 17:46:13.536197 216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem (1708 bytes)
I0120 17:46:13.536808 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 17:46:13.564864 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0120 17:46:13.594949 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 17:46:13.621901 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0120 17:46:13.649436 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0120 17:46:13.676838 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0120 17:46:13.702488 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 17:46:13.777588 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0120 17:46:13.835573 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 17:46:13.861877 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844.pem --> /usr/share/ca-certificates/7844.pem (1338 bytes)
I0120 17:46:13.889994 216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem --> /usr/share/ca-certificates/78442.pem (1708 bytes)
I0120 17:46:13.925019 216535 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 17:46:13.952492 216535 ssh_runner.go:195] Run: openssl version
I0120 17:46:13.958045 216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 17:46:13.982563 216535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 17:46:13.990668 216535 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 16:58 /usr/share/ca-certificates/minikubeCA.pem
I0120 17:46:13.990735 216535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 17:46:13.998119 216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 17:46:14.007920 216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7844.pem && ln -fs /usr/share/ca-certificates/7844.pem /etc/ssl/certs/7844.pem"
I0120 17:46:14.017993 216535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7844.pem
I0120 17:46:14.021861 216535 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:06 /usr/share/ca-certificates/7844.pem
I0120 17:46:14.021926 216535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7844.pem
I0120 17:46:14.028914 216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7844.pem /etc/ssl/certs/51391683.0"
I0120 17:46:14.038066 216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78442.pem && ln -fs /usr/share/ca-certificates/78442.pem /etc/ssl/certs/78442.pem"
I0120 17:46:14.048940 216535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78442.pem
I0120 17:46:14.052695 216535 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 17:06 /usr/share/ca-certificates/78442.pem
I0120 17:46:14.052764 216535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78442.pem
I0120 17:46:14.059923 216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78442.pem /etc/ssl/certs/3ec20f2e.0"
I0120 17:46:14.069343 216535 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 17:46:14.072975 216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0120 17:46:14.083721 216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0120 17:46:14.091131 216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0120 17:46:14.098125 216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0120 17:46:14.105448 216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0120 17:46:14.112727 216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0120 17:46:14.119900 216535 kubeadm.go:392] StartCluster: {Name:old-k8s-version-145659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 17:46:14.119998 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0120 17:46:14.120061 216535 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 17:46:14.177551 216535 cri.go:89] found id: "b5b9683544505314d199b518eecbc67e62715b40df7019ff4891e9a38610f476"
I0120 17:46:14.177584 216535 cri.go:89] found id: "c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
I0120 17:46:14.177598 216535 cri.go:89] found id: "c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
I0120 17:46:14.177602 216535 cri.go:89] found id: "6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
I0120 17:46:14.177606 216535 cri.go:89] found id: "bdf7abdba408a785c8e38f1cfe1b17928b77ea83bb630a565d01e897434779c3"
I0120 17:46:14.177610 216535 cri.go:89] found id: "9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
I0120 17:46:14.177616 216535 cri.go:89] found id: "a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
I0120 17:46:14.177626 216535 cri.go:89] found id: "6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
I0120 17:46:14.177633 216535 cri.go:89] found id: "658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
I0120 17:46:14.177639 216535 cri.go:89] found id: ""
I0120 17:46:14.177690 216535 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0120 17:46:14.190780 216535 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-20T17:46:14Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0120 17:46:14.190859 216535 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 17:46:14.201596 216535 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0120 17:46:14.201616 216535 kubeadm.go:593] restartPrimaryControlPlane start ...
I0120 17:46:14.201668 216535 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0120 17:46:14.212505 216535 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0120 17:46:14.212997 216535 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-145659" does not appear in /home/jenkins/minikube-integration/20109-2518/kubeconfig
I0120 17:46:14.213140 216535 kubeconfig.go:62] /home/jenkins/minikube-integration/20109-2518/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-145659" cluster setting kubeconfig missing "old-k8s-version-145659" context setting]
I0120 17:46:14.213443 216535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2518/kubeconfig: {Name:mk7eb37afa68734d2ba48fcac1147e4fe5c87411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 17:46:14.214723 216535 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0120 17:46:14.225046 216535 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0120 17:46:14.225097 216535 kubeadm.go:597] duration metric: took 23.47144ms to restartPrimaryControlPlane
I0120 17:46:14.225110 216535 kubeadm.go:394] duration metric: took 105.219257ms to StartCluster
I0120 17:46:14.225135 216535 settings.go:142] acquiring lock: {Name:mk1c7d255bd6ff729fb7f0cda8440d084eb0c286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 17:46:14.225216 216535 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20109-2518/kubeconfig
I0120 17:46:14.225948 216535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2518/kubeconfig: {Name:mk7eb37afa68734d2ba48fcac1147e4fe5c87411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 17:46:14.226201 216535 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 17:46:14.226540 216535 config.go:182] Loaded profile config "old-k8s-version-145659": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0120 17:46:14.226587 216535 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 17:46:14.226677 216535 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-145659"
I0120 17:46:14.226697 216535 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-145659"
W0120 17:46:14.226709 216535 addons.go:247] addon storage-provisioner should already be in state true
I0120 17:46:14.226740 216535 host.go:66] Checking if "old-k8s-version-145659" exists ...
I0120 17:46:14.227880 216535 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-145659"
I0120 17:46:14.227904 216535 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-145659"
I0120 17:46:14.227944 216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
I0120 17:46:14.228200 216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
I0120 17:46:14.228540 216535 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-145659"
I0120 17:46:14.228573 216535 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-145659"
W0120 17:46:14.228609 216535 addons.go:247] addon metrics-server should already be in state true
I0120 17:46:14.228647 216535 host.go:66] Checking if "old-k8s-version-145659" exists ...
I0120 17:46:14.229117 216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
I0120 17:46:14.230732 216535 addons.go:69] Setting dashboard=true in profile "old-k8s-version-145659"
I0120 17:46:14.230766 216535 addons.go:238] Setting addon dashboard=true in "old-k8s-version-145659"
W0120 17:46:14.230773 216535 addons.go:247] addon dashboard should already be in state true
I0120 17:46:14.230801 216535 host.go:66] Checking if "old-k8s-version-145659" exists ...
I0120 17:46:14.231460 216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
I0120 17:46:14.231853 216535 out.go:177] * Verifying Kubernetes components...
I0120 17:46:14.234732 216535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 17:46:14.291391 216535 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0120 17:46:14.293958 216535 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0120 17:46:14.294000 216535 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0120 17:46:14.294069 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:14.319127 216535 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-145659"
W0120 17:46:14.319152 216535 addons.go:247] addon default-storageclass should already be in state true
I0120 17:46:14.319177 216535 host.go:66] Checking if "old-k8s-version-145659" exists ...
I0120 17:46:14.323918 216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
I0120 17:46:14.327122 216535 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 17:46:14.327289 216535 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0120 17:46:14.329920 216535 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0120 17:46:14.330281 216535 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 17:46:14.330307 216535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 17:46:14.330378 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:14.333464 216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0120 17:46:14.333487 216535 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0120 17:46:14.333554 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:14.383669 216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
I0120 17:46:14.397086 216535 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 17:46:14.397107 216535 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 17:46:14.397167 216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
I0120 17:46:14.403615 216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
I0120 17:46:14.405758 216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
I0120 17:46:14.433152 216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
I0120 17:46:14.462735 216535 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 17:46:14.502474 216535 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-145659" to be "Ready" ...
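node_ready.go waits up to 6m0s for the node's Ready condition to become True, swallowing transient errors such as the connection-refused failures that appear below while the apiserver restarts. A minimal client-go sketch of that style of wait; the helper name and kubeconfig path are illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its Ready condition is True,
// treating Get errors as retryable while the control plane comes up.
func waitForNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// e.g. "dial tcp ...:8443: connect: connection refused"
			return false, nil
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForNodeReady(client, "old-k8s-version-145659", 6*time.Minute))
}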
I0120 17:46:14.592513 216535 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0120 17:46:14.592536 216535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0120 17:46:14.629513 216535 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0120 17:46:14.629599 216535 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0120 17:46:14.673229 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 17:46:14.698415 216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0120 17:46:14.698499 216535 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0120 17:46:14.702688 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 17:46:14.723082 216535 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0120 17:46:14.723162 216535 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0120 17:46:14.777166 216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0120 17:46:14.777264 216535 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0120 17:46:14.832906 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 17:46:14.928699 216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0120 17:46:14.928790 216535 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
W0120 17:46:15.076953 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:15.077058 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.077113 216535 retry.go:31] will retry after 304.037714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.077156 216535 retry.go:31] will retry after 144.986778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
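The irregular delays chosen by retry.go in the lines above and below (304ms, 144ms, 212ms, ...) are consistent with a randomized, growing backoff. A plain-Go sketch of that retry pattern, assuming jittered exponential backoff (an assumption; the exact policy is internal to minikube):

package main

import (
	"errors"
	"fmt"
	"math"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs op with a randomized, growing delay until it
// succeeds or the attempt budget is exhausted.
func retryWithBackoff(maxAttempts int, base time.Duration, op func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		// Jitter around an exponentially growing base, which would
		// produce the uneven "will retry after" durations in the log.
		d := time.Duration(float64(base) * math.Pow(2, float64(attempt)) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection to the server localhost:8443 was refused")
		}
		return nil
	})
	fmt.Println("done:", err)
}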
I0120 17:46:15.079058 216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0120 17:46:15.079132 216535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0120 17:46:15.126780 216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0120 17:46:15.126863 216535 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W0120 17:46:15.173200 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.173296 216535 retry.go:31] will retry after 212.361676ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.186114 216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0120 17:46:15.186193 216535 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0120 17:46:15.209757 216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0120 17:46:15.209832 216535 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0120 17:46:15.223066 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0120 17:46:15.239112 216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0120 17:46:15.239189 216535 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0120 17:46:15.283855 216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0120 17:46:15.283929 216535 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0120 17:46:15.316443 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 17:46:15.381631 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 17:46:15.385921 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
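Each of these ssh_runner lines runs the version-pinned kubectl on the node with an explicit KUBECONFIG and one -f flag per manifest. A sketch of constructing that invocation; the sudo and SSH transport are elided, and the paths are taken verbatim from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// kubectlApply invokes `kubectl apply --force -f m1 -f m2 ...` with the
// given kubeconfig exported in the environment.
func kubectlApply(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"apply", "--force"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	err := kubectlApply(
		"/var/lib/minikube/binaries/v1.20.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	fmt.Println(err)
}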
W0120 17:46:15.475534 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.475565 216535 retry.go:31] will retry after 264.217766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:15.485096 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.485127 216535 retry.go:31] will retry after 292.160269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:15.582210 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.582243 216535 retry.go:31] will retry after 356.191953ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:15.638795 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.638828 216535 retry.go:31] will retry after 369.440037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.740173 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0120 17:46:15.777647 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 17:46:15.853879 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.853958 216535 retry.go:31] will retry after 804.813849ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:15.914178 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.914255 216535 retry.go:31] will retry after 503.149977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:15.938586 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 17:46:16.008458 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 17:46:16.042488 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:16.042521 216535 retry.go:31] will retry after 646.854109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:16.135647 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:16.135679 216535 retry.go:31] will retry after 537.353244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:16.417984 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 17:46:16.503765 216535 node_ready.go:53] error getting node "old-k8s-version-145659": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-145659": dial tcp 192.168.76.2:8443: connect: connection refused
W0120 17:46:16.561798 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:16.561878 216535 retry.go:31] will retry after 361.333645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:16.659121 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0120 17:46:16.673461 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 17:46:16.689816 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 17:46:16.872017 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:16.872104 216535 retry.go:31] will retry after 1.164701291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:16.913870 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:16.913947 216535 retry.go:31] will retry after 864.208742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:16.913974 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:16.913994 216535 retry.go:31] will retry after 757.965934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:16.924138 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 17:46:17.026348 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:17.026377 216535 retry.go:31] will retry after 641.604695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:17.668523 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 17:46:17.672939 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 17:46:17.779299 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 17:46:17.840675 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:17.840708 216535 retry.go:31] will retry after 717.030986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:17.870257 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:17.870289 216535 retry.go:31] will retry after 1.587737674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:17.955559 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:17.955594 216535 retry.go:31] will retry after 1.255434296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:18.037457 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 17:46:18.157248 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:18.157278 216535 retry.go:31] will retry after 738.912551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:18.558229 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 17:46:18.685255 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:18.685290 216535 retry.go:31] will retry after 1.664066253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:18.896525 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 17:46:18.999934 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:18.999963 216535 retry.go:31] will retry after 1.375916985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:19.003602 216535 node_ready.go:53] error getting node "old-k8s-version-145659": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-145659": dial tcp 192.168.76.2:8443: connect: connection refused
I0120 17:46:19.212030 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 17:46:19.310268 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:19.310298 216535 retry.go:31] will retry after 2.229721873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:19.458563 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 17:46:19.578564 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:19.578595 216535 retry.go:31] will retry after 1.790748201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:20.350222 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 17:46:20.376505 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 17:46:20.570451 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:20.570482 216535 retry.go:31] will retry after 1.618998146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 17:46:20.586593 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:20.586642 216535 retry.go:31] will retry after 2.937473882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:21.003780 216535 node_ready.go:53] error getting node "old-k8s-version-145659": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-145659": dial tcp 192.168.76.2:8443: connect: connection refused
I0120 17:46:21.370420 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 17:46:21.488732 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:21.488773 216535 retry.go:31] will retry after 1.507248326s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:21.540577 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 17:46:21.674767 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:21.674798 216535 retry.go:31] will retry after 2.288869555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:22.189896 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 17:46:22.363816 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:22.363847 216535 retry.go:31] will retry after 5.437445769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:22.996838 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 17:46:23.209856 216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:23.209888 216535 retry.go:31] will retry after 6.351708828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 17:46:23.503471 216535 node_ready.go:53] error getting node "old-k8s-version-145659": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-145659": dial tcp 192.168.76.2:8443: connect: connection refused
I0120 17:46:23.524698 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0120 17:46:23.964485 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 17:46:27.802974 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 17:46:29.561779 216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 17:46:33.262437 216535 node_ready.go:49] node "old-k8s-version-145659" has status "Ready":"True"
I0120 17:46:33.262473 216535 node_ready.go:38] duration metric: took 18.759947561s for node "old-k8s-version-145659" to be "Ready" ...
I0120 17:46:33.262485 216535 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 17:46:33.561719 216535 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-gtjp2" in "kube-system" namespace to be "Ready" ...
I0120 17:46:33.697509 216535 pod_ready.go:93] pod "coredns-74ff55c5b-gtjp2" in "kube-system" namespace has status "Ready":"True"
I0120 17:46:33.697535 216535 pod_ready.go:82] duration metric: took 135.786323ms for pod "coredns-74ff55c5b-gtjp2" in "kube-system" namespace to be "Ready" ...
I0120 17:46:33.697547 216535 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
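pod_ready.go treats a pod as Ready when its PodReady condition is True, iterating over the label selectors listed above for system-critical pods. A client-go sketch of that check, with an illustrative kubeconfig path (a sketch, not minikube's actual pod_ready.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same label selectors the log waits on for system-critical pods.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			fmt.Printf("%s Ready=%v\n", pods.Items[i].Name, podIsReady(&pods.Items[i]))
		}
	}
}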
I0120 17:46:34.769152 216535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.24441767s)
I0120 17:46:34.844139 216535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.879608142s)
I0120 17:46:34.844203 216535 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-145659"
I0120 17:46:35.375442 216535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.572421593s)
I0120 17:46:35.375684 216535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.8138478s)
I0120 17:46:35.378616 216535 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-145659 addons enable metrics-server
I0120 17:46:35.381648 216535 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
I0120 17:46:35.384693 216535 addons.go:514] duration metric: took 21.158081764s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
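Once start returns, the addon set reported here can be cross-checked against the profile (a sketch, assuming the same profile name):

# list addon states for this profile (sketch)
minikube -p old-k8s-version-145659 addons list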
I0120 17:46:35.704216 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:46:37.704836 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:46:40.210059 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:46:42.704010 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:46:44.704359 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:46:47.204306 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:46:49.204610 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:46:51.205222 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:46:53.245457 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:46:55.704390 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:46:57.720388 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:00.233791 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:02.703201 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:04.706343 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:07.204424 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:09.204459 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:11.205114 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:13.207455 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:15.703999 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:18.206172 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:20.703848 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:22.705047 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:25.203868 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:27.204013 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:29.204819 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:31.205720 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:33.704553 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:36.204347 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:38.204461 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:40.703783 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:42.704192 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:44.704169 216535 pod_ready.go:93] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
I0120 17:47:44.704198 216535 pod_ready.go:82] duration metric: took 1m11.006642524s for pod "etcd-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
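Each of these waits reads the pod's Ready condition from its status; the condition that just flipped can be fetched directly (a sketch):

# read the Ready condition on the etcd pod (sketch)
kubectl -n kube-system get pod etcd-old-k8s-version-145659 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'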
I0120 17:47:44.704215 216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:47:44.709524 216535 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
I0120 17:47:44.709549 216535 pod_ready.go:82] duration metric: took 5.326555ms for pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:47:44.709561 216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:47:46.720597 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:49.218373 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:51.220240 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:53.731463 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:56.219975 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:58.715969 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:00.716172 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:01.717465 216535 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
I0120 17:48:01.717491 216535 pod_ready.go:82] duration metric: took 17.007921004s for pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:48:01.717503 216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxqgj" in "kube-system" namespace to be "Ready" ...
I0120 17:48:01.723321 216535 pod_ready.go:93] pod "kube-proxy-mxqgj" in "kube-system" namespace has status "Ready":"True"
I0120 17:48:01.723396 216535 pod_ready.go:82] duration metric: took 5.87229ms for pod "kube-proxy-mxqgj" in "kube-system" namespace to be "Ready" ...
I0120 17:48:01.723409 216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:48:01.729329 216535 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
I0120 17:48:01.729356 216535 pod_ready.go:82] duration metric: took 5.938522ms for pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:48:01.729367 216535 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace to be "Ready" ...
I0120 17:48:03.811502 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:06.253893 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:08.739025 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:11.239058 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:13.736337 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:15.736465 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:18.247835 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:20.747201 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:23.242774 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:25.735545 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:27.736290 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:30.243746 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:32.737127 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:34.737472 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:37.243570 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:39.245847 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:41.736938 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:44.242652 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:46.736378 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:49.243543 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:51.243642 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:53.244529 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:55.245129 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:57.736190 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:00.244816 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:02.245766 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:04.295578 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:06.736622 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:08.737036 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:10.737207 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:13.242704 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:15.735684 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:17.737791 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:20.244523 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:22.244600 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:24.735659 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:26.736790 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:29.250223 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:31.753850 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:34.236244 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:36.243241 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:38.736581 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:40.736828 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:43.238843 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:45.736169 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:47.736599 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:50.244905 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:52.737487 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:54.754561 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:57.236641 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:59.238929 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:01.241741 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:03.242427 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:05.736193 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:07.736416 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:10.240839 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:12.244010 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:14.246547 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:16.737206 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:19.244440 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:21.244729 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:23.736600 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:26.244612 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:28.250474 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:30.739819 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:33.245363 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:35.737773 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:37.742221 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:40.237488 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:42.738257 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:45.239382 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:47.736272 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:50.236202 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:52.239206 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:54.244758 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:56.736346 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:59.237672 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:01.244367 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:03.736783 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:05.737354 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:08.235650 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:10.237001 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:12.237848 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:14.240863 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:16.243349 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:18.737611 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:21.244639 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:23.735945 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:26.242287 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:28.735482 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:30.736321 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:32.736991 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:35.236754 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:37.244823 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:39.735311 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:41.735810 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:43.736169 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:45.742400 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:48.243218 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:50.244231 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:52.244707 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:54.248009 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:56.737674 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:59.241838 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:52:01.244283 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:52:01.736777 216535 pod_ready.go:82] duration metric: took 4m0.007395127s for pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace to be "Ready" ...
E0120 17:52:01.736846 216535 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0120 17:52:01.736870 216535 pod_ready.go:39] duration metric: took 5m28.474374205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
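metrics-server is the one pod that never turns Ready: its image points at fake.domain, which the kubelet cannot resolve (see the ErrImagePull entries in the kubelet log below), so the wait's context deadline expires first. The stuck pod can be inspected directly (a sketch; describe ends with the pod's recent events):

# show the pull failure on the metrics-server pod (sketch)
kubectl -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20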
I0120 17:52:01.736899 216535 api_server.go:52] waiting for apiserver process to appear ...
I0120 17:52:01.736964 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 17:52:01.737053 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 17:52:01.781253 216535 cri.go:89] found id: "f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
I0120 17:52:01.781321 216535 cri.go:89] found id: "a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
I0120 17:52:01.781341 216535 cri.go:89] found id: ""
I0120 17:52:01.781356 216535 logs.go:282] 2 containers: [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e]
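Each component is located by asking the CRI for containers matching its name; because the listing uses State:all, the second id is most likely the pre-restart container. The same query works by hand (a sketch; <container-id> is a placeholder for one of the ids above):

# list kube-apiserver containers, including exited ones, then inspect one (sketch)
sudo crictl ps -a --name kube-apiserver
sudo crictl inspect <container-id>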
I0120 17:52:01.781432 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.785393 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.788792 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 17:52:01.788862 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 17:52:01.833834 216535 cri.go:89] found id: "17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
I0120 17:52:01.833869 216535 cri.go:89] found id: "658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
I0120 17:52:01.833902 216535 cri.go:89] found id: ""
I0120 17:52:01.833910 216535 logs.go:282] 2 containers: [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec]
I0120 17:52:01.833990 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.838990 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.843467 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 17:52:01.843556 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 17:52:01.886764 216535 cri.go:89] found id: "583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
I0120 17:52:01.886856 216535 cri.go:89] found id: "c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
I0120 17:52:01.886877 216535 cri.go:89] found id: ""
I0120 17:52:01.886908 216535 logs.go:282] 2 containers: [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc]
I0120 17:52:01.886983 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.891011 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.894775 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 17:52:01.894856 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 17:52:01.949896 216535 cri.go:89] found id: "2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
I0120 17:52:01.949920 216535 cri.go:89] found id: "9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
I0120 17:52:01.949925 216535 cri.go:89] found id: ""
I0120 17:52:01.949933 216535 logs.go:282] 2 containers: [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90]
I0120 17:52:01.949992 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.954296 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.958371 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 17:52:01.958506 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 17:52:02.018621 216535 cri.go:89] found id: "dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
I0120 17:52:02.018645 216535 cri.go:89] found id: "6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
I0120 17:52:02.018650 216535 cri.go:89] found id: ""
I0120 17:52:02.018657 216535 logs.go:282] 2 containers: [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42]
I0120 17:52:02.018714 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.023690 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.028696 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 17:52:02.028860 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 17:52:02.096051 216535 cri.go:89] found id: "c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
I0120 17:52:02.096073 216535 cri.go:89] found id: "6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
I0120 17:52:02.096078 216535 cri.go:89] found id: ""
I0120 17:52:02.096085 216535 logs.go:282] 2 containers: [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f]
I0120 17:52:02.096149 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.100993 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.106917 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 17:52:02.106990 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 17:52:02.174049 216535 cri.go:89] found id: "6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
I0120 17:52:02.174080 216535 cri.go:89] found id: "c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
I0120 17:52:02.174086 216535 cri.go:89] found id: ""
I0120 17:52:02.174093 216535 logs.go:282] 2 containers: [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f]
I0120 17:52:02.174145 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.179127 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.184826 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 17:52:02.184901 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 17:52:02.254018 216535 cri.go:89] found id: "9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
I0120 17:52:02.254041 216535 cri.go:89] found id: ""
I0120 17:52:02.254049 216535 logs.go:282] 1 containers: [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8]
I0120 17:52:02.254122 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.260217 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 17:52:02.260276 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 17:52:02.316256 216535 cri.go:89] found id: "027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
I0120 17:52:02.316280 216535 cri.go:89] found id: "91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
I0120 17:52:02.316286 216535 cri.go:89] found id: ""
I0120 17:52:02.316293 216535 logs.go:282] 2 containers: [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd]
I0120 17:52:02.316352 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.321766 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.327502 216535 logs.go:123] Gathering logs for dmesg ...
I0120 17:52:02.327525 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
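In the dmesg call above, -P disables the pager, -H prints human-readable timestamps, -L=never turns color off, and --level keeps only warning severity and above; tail then trims to the last 400 lines. The same one-liner works standalone on the node (a sketch):

# kernel warnings and errors, most recent 400 lines (sketch)
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400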
I0120 17:52:02.343747 216535 logs.go:123] Gathering logs for describe nodes ...
I0120 17:52:02.343778 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 17:52:02.674989 216535 logs.go:123] Gathering logs for kindnet [c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f] ...
I0120 17:52:02.675019 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
I0120 17:52:02.739409 216535 logs.go:123] Gathering logs for kubernetes-dashboard [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8] ...
I0120 17:52:02.739429 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
I0120 17:52:02.805987 216535 logs.go:123] Gathering logs for kube-proxy [6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42] ...
I0120 17:52:02.806072 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
I0120 17:52:02.862091 216535 logs.go:123] Gathering logs for kindnet [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478] ...
I0120 17:52:02.862117 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
I0120 17:52:02.952148 216535 logs.go:123] Gathering logs for storage-provisioner [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2] ...
I0120 17:52:02.952223 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
I0120 17:52:03.020765 216535 logs.go:123] Gathering logs for container status ...
I0120 17:52:03.020815 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
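The container-status command uses a fallback: the command substitution picks crictl's full path when it is installed, and the trailing || retries with docker if the CRI listing fails. The same pattern, written out (a sketch):

# prefer crictl, fall back to docker (sketch of the pattern above)
sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a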
I0120 17:52:03.090382 216535 logs.go:123] Gathering logs for kubelet ...
I0120 17:52:03.090580 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
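The kubelet problems flagged next are klog error lines found in this journal window; the same filter can be applied by hand (a sketch):

# keep only kubelet error-level (E...) lines from the last 400 journal entries (sketch)
sudo journalctl -u kubelet -n 400 --no-pager | grep -E 'kubelet\[[0-9]+\]: E[0-9]{4}'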
W0120 17:52:03.161589 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:34 old-k8s-version-145659 kubelet[662]: E0120 17:46:34.880251 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:03.161853 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:35 old-k8s-version-145659 kubelet[662]: E0120 17:46:35.605048 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.165125 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:50 old-k8s-version-145659 kubelet[662]: E0120 17:46:50.413085 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:03.167727 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:03 old-k8s-version-145659 kubelet[662]: E0120 17:47:03.698813 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.167958 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.404037 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.168311 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.706245 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.168784 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.711644 662 pod_workers.go:191] Error syncing pod ceb78d8f-604f-44e7-a643-6a7788c747ae ("storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"
W0120 17:52:03.169139 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.712757 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.170224 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:18 old-k8s-version-145659 kubelet[662]: E0120 17:47:18.760650 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.172926 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:19 old-k8s-version-145659 kubelet[662]: E0120 17:47:19.413053 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:03.173303 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:23 old-k8s-version-145659 kubelet[662]: E0120 17:47:23.877153 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.173514 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:31 old-k8s-version-145659 kubelet[662]: E0120 17:47:31.403908 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.173865 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:36 old-k8s-version-145659 kubelet[662]: E0120 17:47:36.403402 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.174073 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:46 old-k8s-version-145659 kubelet[662]: E0120 17:47:46.412253 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.174688 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:48 old-k8s-version-145659 kubelet[662]: E0120 17:47:48.845203 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.175052 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:53 old-k8s-version-145659 kubelet[662]: E0120 17:47:53.876712 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.175261 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:58 old-k8s-version-145659 kubelet[662]: E0120 17:47:58.411076 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.175632 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:06 old-k8s-version-145659 kubelet[662]: E0120 17:48:06.403375 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.178118 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:12 old-k8s-version-145659 kubelet[662]: E0120 17:48:12.422259 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:03.178583 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:21 old-k8s-version-145659 kubelet[662]: E0120 17:48:21.403254 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.178770 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:25 old-k8s-version-145659 kubelet[662]: E0120 17:48:25.404070 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.179381 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:34 old-k8s-version-145659 kubelet[662]: E0120 17:48:34.988709 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.179564 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:39 old-k8s-version-145659 kubelet[662]: E0120 17:48:39.403769 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.179889 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:43 old-k8s-version-145659 kubelet[662]: E0120 17:48:43.877519 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.180070 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:53 old-k8s-version-145659 kubelet[662]: E0120 17:48:53.403792 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.180396 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:58 old-k8s-version-145659 kubelet[662]: E0120 17:48:58.408685 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.180579 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:06 old-k8s-version-145659 kubelet[662]: E0120 17:49:06.403734 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.180905 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:11 old-k8s-version-145659 kubelet[662]: E0120 17:49:11.403959 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.181086 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:18 old-k8s-version-145659 kubelet[662]: E0120 17:49:18.408125 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.181407 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:24 old-k8s-version-145659 kubelet[662]: E0120 17:49:24.407972 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.181587 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:30 old-k8s-version-145659 kubelet[662]: E0120 17:49:30.404331 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.181909 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:37 old-k8s-version-145659 kubelet[662]: E0120 17:49:37.403265 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.184453 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:45 old-k8s-version-145659 kubelet[662]: E0120 17:49:45.414508 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:03.184816 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:48 old-k8s-version-145659 kubelet[662]: E0120 17:49:48.403936 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.185031 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:00 old-k8s-version-145659 kubelet[662]: E0120 17:50:00.404116 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.185681 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:04 old-k8s-version-145659 kubelet[662]: E0120 17:50:04.268511 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.185896 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:12 old-k8s-version-145659 kubelet[662]: E0120 17:50:12.407685 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.186251 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:13 old-k8s-version-145659 kubelet[662]: E0120 17:50:13.876917 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.186463 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:25 old-k8s-version-145659 kubelet[662]: E0120 17:50:25.403750 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.186830 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:28 old-k8s-version-145659 kubelet[662]: E0120 17:50:28.405640 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.187051 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:40 old-k8s-version-145659 kubelet[662]: E0120 17:50:40.403822 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.187407 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:42 old-k8s-version-145659 kubelet[662]: E0120 17:50:42.404811 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.187689 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:55 old-k8s-version-145659 kubelet[662]: E0120 17:50:55.403674 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.188047 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: E0120 17:50:56.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.188255 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:07 old-k8s-version-145659 kubelet[662]: E0120 17:51:07.403709 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.188613 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: E0120 17:51:10.403931 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.188828 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:18 old-k8s-version-145659 kubelet[662]: E0120 17:51:18.403950 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.189195 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: E0120 17:51:22.404319 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.189403 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:29 old-k8s-version-145659 kubelet[662]: E0120 17:51:29.403725 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.189758 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.189969 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.190324 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.190536 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.190894 216535 logs.go:138] Found kubelet problem: Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
I0120 17:52:03.190919 216535 logs.go:123] Gathering logs for etcd [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192] ...
I0120 17:52:03.190947 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
I0120 17:52:03.259910 216535 logs.go:123] Gathering logs for kube-scheduler [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040] ...
I0120 17:52:03.259991 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
I0120 17:52:03.317942 216535 logs.go:123] Gathering logs for kube-scheduler [9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90] ...
I0120 17:52:03.318013 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
I0120 17:52:03.380525 216535 logs.go:123] Gathering logs for kube-apiserver [a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e] ...
I0120 17:52:03.380608 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
I0120 17:52:03.453396 216535 logs.go:123] Gathering logs for coredns [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647] ...
I0120 17:52:03.453442 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
I0120 17:52:03.506945 216535 logs.go:123] Gathering logs for coredns [c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc] ...
I0120 17:52:03.506974 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
I0120 17:52:03.555548 216535 logs.go:123] Gathering logs for kube-controller-manager [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c] ...
I0120 17:52:03.555628 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
I0120 17:52:03.674894 216535 logs.go:123] Gathering logs for storage-provisioner [91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd] ...
I0120 17:52:03.674971 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
I0120 17:52:03.746584 216535 logs.go:123] Gathering logs for containerd ...
I0120 17:52:03.746608 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 17:52:03.830076 216535 logs.go:123] Gathering logs for kube-apiserver [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad] ...
I0120 17:52:03.830148 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
I0120 17:52:03.938308 216535 logs.go:123] Gathering logs for etcd [658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec] ...
I0120 17:52:03.938397 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
I0120 17:52:04.023242 216535 logs.go:123] Gathering logs for kube-proxy [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e] ...
I0120 17:52:04.023376 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
I0120 17:52:04.093186 216535 logs.go:123] Gathering logs for kube-controller-manager [6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f] ...
I0120 17:52:04.093218 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
I0120 17:52:04.203549 216535 out.go:358] Setting ErrFile to fd 2...
I0120 17:52:04.203705 216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 17:52:04.203798 216535 out.go:270] X Problems detected in kubelet:
W0120 17:52:04.203843 216535 out.go:270] Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:04.203889 216535 out.go:270] Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:04.203925 216535 out.go:270] Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:04.203955 216535 out.go:270] Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:04.203988 216535 out.go:270] Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
I0120 17:52:04.204019 216535 out.go:358] Setting ErrFile to fd 2...
I0120 17:52:04.204048 216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:52:14.204540 216535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 17:52:14.216885 216535 api_server.go:72] duration metric: took 5m59.990640844s to wait for apiserver process to appear ...
I0120 17:52:14.216913 216535 api_server.go:88] waiting for apiserver healthz status ...
I0120 17:52:14.216952 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 17:52:14.217012 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 17:52:14.275816 216535 cri.go:89] found id: "f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
I0120 17:52:14.275838 216535 cri.go:89] found id: "a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
I0120 17:52:14.275843 216535 cri.go:89] found id: ""
I0120 17:52:14.275850 216535 logs.go:282] 2 containers: [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e]
I0120 17:52:14.275981 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.280911 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.284620 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 17:52:14.284694 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 17:52:14.324506 216535 cri.go:89] found id: "17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
I0120 17:52:14.324530 216535 cri.go:89] found id: "658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
I0120 17:52:14.324536 216535 cri.go:89] found id: ""
I0120 17:52:14.324544 216535 logs.go:282] 2 containers: [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec]
I0120 17:52:14.324602 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.328307 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.331742 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 17:52:14.331812 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 17:52:14.375892 216535 cri.go:89] found id: "583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
I0120 17:52:14.375913 216535 cri.go:89] found id: "c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
I0120 17:52:14.375919 216535 cri.go:89] found id: ""
I0120 17:52:14.375926 216535 logs.go:282] 2 containers: [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc]
I0120 17:52:14.376011 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.379798 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.383248 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 17:52:14.383317 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 17:52:14.431319 216535 cri.go:89] found id: "2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
I0120 17:52:14.431376 216535 cri.go:89] found id: "9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
I0120 17:52:14.431382 216535 cri.go:89] found id: ""
I0120 17:52:14.431388 216535 logs.go:282] 2 containers: [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90]
I0120 17:52:14.431444 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.435015 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.438536 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 17:52:14.438604 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 17:52:14.483659 216535 cri.go:89] found id: "dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
I0120 17:52:14.483691 216535 cri.go:89] found id: "6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
I0120 17:52:14.483697 216535 cri.go:89] found id: ""
I0120 17:52:14.483703 216535 logs.go:282] 2 containers: [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42]
I0120 17:52:14.483778 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.487550 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.491261 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 17:52:14.491399 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 17:52:14.537554 216535 cri.go:89] found id: "c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
I0120 17:52:14.537574 216535 cri.go:89] found id: "6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
I0120 17:52:14.537580 216535 cri.go:89] found id: ""
I0120 17:52:14.537587 216535 logs.go:282] 2 containers: [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f]
I0120 17:52:14.537645 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.541369 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.544958 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 17:52:14.545047 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 17:52:14.582569 216535 cri.go:89] found id: "6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
I0120 17:52:14.582592 216535 cri.go:89] found id: "c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
I0120 17:52:14.582598 216535 cri.go:89] found id: ""
I0120 17:52:14.582605 216535 logs.go:282] 2 containers: [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f]
I0120 17:52:14.582683 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.586500 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.590053 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 17:52:14.590126 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 17:52:14.663263 216535 cri.go:89] found id: "027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
I0120 17:52:14.663283 216535 cri.go:89] found id: "91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
I0120 17:52:14.663289 216535 cri.go:89] found id: ""
I0120 17:52:14.663296 216535 logs.go:282] 2 containers: [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd]
I0120 17:52:14.663372 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.666867 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.672075 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 17:52:14.672174 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 17:52:14.720019 216535 cri.go:89] found id: "9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
I0120 17:52:14.720042 216535 cri.go:89] found id: ""
I0120 17:52:14.720054 216535 logs.go:282] 1 containers: [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8]
I0120 17:52:14.720116 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.723774 216535 logs.go:123] Gathering logs for kindnet [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478] ...
I0120 17:52:14.723800 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
I0120 17:52:14.773380 216535 logs.go:123] Gathering logs for storage-provisioner [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2] ...
I0120 17:52:14.773417 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
I0120 17:52:14.816814 216535 logs.go:123] Gathering logs for kubelet ...
I0120 17:52:14.816842 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0120 17:52:14.876608 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:34 old-k8s-version-145659 kubelet[662]: E0120 17:46:34.880251 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:14.876839 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:35 old-k8s-version-145659 kubelet[662]: E0120 17:46:35.605048 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.879700 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:50 old-k8s-version-145659 kubelet[662]: E0120 17:46:50.413085 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:14.883739 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:03 old-k8s-version-145659 kubelet[662]: E0120 17:47:03.698813 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.883950 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.404037 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.884282 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.706245 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.884720 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.711644 662 pod_workers.go:191] Error syncing pod ceb78d8f-604f-44e7-a643-6a7788c747ae ("storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"
W0120 17:52:14.885047 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.712757 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.886100 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:18 old-k8s-version-145659 kubelet[662]: E0120 17:47:18.760650 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.888645 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:19 old-k8s-version-145659 kubelet[662]: E0120 17:47:19.413053 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:14.889002 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:23 old-k8s-version-145659 kubelet[662]: E0120 17:47:23.877153 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.889194 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:31 old-k8s-version-145659 kubelet[662]: E0120 17:47:31.403908 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.889559 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:36 old-k8s-version-145659 kubelet[662]: E0120 17:47:36.403402 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.889746 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:46 old-k8s-version-145659 kubelet[662]: E0120 17:47:46.412253 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.890333 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:48 old-k8s-version-145659 kubelet[662]: E0120 17:47:48.845203 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.890660 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:53 old-k8s-version-145659 kubelet[662]: E0120 17:47:53.876712 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.890848 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:58 old-k8s-version-145659 kubelet[662]: E0120 17:47:58.411076 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.891179 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:06 old-k8s-version-145659 kubelet[662]: E0120 17:48:06.403375 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.893674 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:12 old-k8s-version-145659 kubelet[662]: E0120 17:48:12.422259 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:14.894035 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:21 old-k8s-version-145659 kubelet[662]: E0120 17:48:21.403254 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.894400 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:25 old-k8s-version-145659 kubelet[662]: E0120 17:48:25.404070 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.895006 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:34 old-k8s-version-145659 kubelet[662]: E0120 17:48:34.988709 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.895192 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:39 old-k8s-version-145659 kubelet[662]: E0120 17:48:39.403769 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.895564 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:43 old-k8s-version-145659 kubelet[662]: E0120 17:48:43.877519 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.895751 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:53 old-k8s-version-145659 kubelet[662]: E0120 17:48:53.403792 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.896077 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:58 old-k8s-version-145659 kubelet[662]: E0120 17:48:58.408685 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.896260 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:06 old-k8s-version-145659 kubelet[662]: E0120 17:49:06.403734 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.896584 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:11 old-k8s-version-145659 kubelet[662]: E0120 17:49:11.403959 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.896768 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:18 old-k8s-version-145659 kubelet[662]: E0120 17:49:18.408125 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.897094 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:24 old-k8s-version-145659 kubelet[662]: E0120 17:49:24.407972 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.897306 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:30 old-k8s-version-145659 kubelet[662]: E0120 17:49:30.404331 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.897633 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:37 old-k8s-version-145659 kubelet[662]: E0120 17:49:37.403265 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.900069 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:45 old-k8s-version-145659 kubelet[662]: E0120 17:49:45.414508 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:14.900399 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:48 old-k8s-version-145659 kubelet[662]: E0120 17:49:48.403936 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.900588 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:00 old-k8s-version-145659 kubelet[662]: E0120 17:50:00.404116 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.901175 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:04 old-k8s-version-145659 kubelet[662]: E0120 17:50:04.268511 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.901358 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:12 old-k8s-version-145659 kubelet[662]: E0120 17:50:12.407685 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.901683 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:13 old-k8s-version-145659 kubelet[662]: E0120 17:50:13.876917 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.901866 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:25 old-k8s-version-145659 kubelet[662]: E0120 17:50:25.403750 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.902191 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:28 old-k8s-version-145659 kubelet[662]: E0120 17:50:28.405640 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.902379 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:40 old-k8s-version-145659 kubelet[662]: E0120 17:50:40.403822 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.902706 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:42 old-k8s-version-145659 kubelet[662]: E0120 17:50:42.404811 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.902892 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:55 old-k8s-version-145659 kubelet[662]: E0120 17:50:55.403674 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.903219 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: E0120 17:50:56.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.903413 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:07 old-k8s-version-145659 kubelet[662]: E0120 17:51:07.403709 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.903739 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: E0120 17:51:10.403931 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.903923 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:18 old-k8s-version-145659 kubelet[662]: E0120 17:51:18.403950 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.904249 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: E0120 17:51:22.404319 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.904433 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:29 old-k8s-version-145659 kubelet[662]: E0120 17:51:29.403725 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.904758 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.904944 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.905272 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.905457 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.905785 216535 logs.go:138] Found kubelet problem: Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.905970 216535 logs.go:138] Found kubelet problem: Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.906299 216535 logs.go:138] Found kubelet problem: Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
I0120 17:52:14.906310 216535 logs.go:123] Gathering logs for kube-apiserver [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad] ...
I0120 17:52:14.906325 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
I0120 17:52:14.972580 216535 logs.go:123] Gathering logs for coredns [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647] ...
I0120 17:52:14.972618 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
I0120 17:52:15.024121 216535 logs.go:123] Gathering logs for containerd ...
I0120 17:52:15.024165 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 17:52:15.100734 216535 logs.go:123] Gathering logs for describe nodes ...
I0120 17:52:15.100774 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 17:52:15.284993 216535 logs.go:123] Gathering logs for coredns [c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc] ...
I0120 17:52:15.285026 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
I0120 17:52:15.335235 216535 logs.go:123] Gathering logs for kindnet [c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f] ...
I0120 17:52:15.335264 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
I0120 17:52:15.374772 216535 logs.go:123] Gathering logs for storage-provisioner [91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd] ...
I0120 17:52:15.374806 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
I0120 17:52:15.433634 216535 logs.go:123] Gathering logs for container status ...
I0120 17:52:15.433663 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 17:52:15.488059 216535 logs.go:123] Gathering logs for etcd [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192] ...
I0120 17:52:15.488091 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
I0120 17:52:15.542254 216535 logs.go:123] Gathering logs for kube-scheduler [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040] ...
I0120 17:52:15.542284 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
I0120 17:52:15.582486 216535 logs.go:123] Gathering logs for kube-controller-manager [6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f] ...
I0120 17:52:15.582513 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
I0120 17:52:15.660944 216535 logs.go:123] Gathering logs for kube-scheduler [9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90] ...
I0120 17:52:15.661023 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
I0120 17:52:15.709672 216535 logs.go:123] Gathering logs for kube-proxy [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e] ...
I0120 17:52:15.709763 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
I0120 17:52:15.755613 216535 logs.go:123] Gathering logs for kube-proxy [6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42] ...
I0120 17:52:15.755647 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
I0120 17:52:15.794100 216535 logs.go:123] Gathering logs for kube-controller-manager [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c] ...
I0120 17:52:15.794126 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
I0120 17:52:15.876898 216535 logs.go:123] Gathering logs for kubernetes-dashboard [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8] ...
I0120 17:52:15.876935 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
I0120 17:52:15.937814 216535 logs.go:123] Gathering logs for dmesg ...
I0120 17:52:15.937842 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 17:52:15.955450 216535 logs.go:123] Gathering logs for kube-apiserver [a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e] ...
I0120 17:52:15.955481 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
I0120 17:52:16.047655 216535 logs.go:123] Gathering logs for etcd [658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec] ...
I0120 17:52:16.047691 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
I0120 17:52:16.094113 216535 out.go:358] Setting ErrFile to fd 2...
I0120 17:52:16.094145 216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 17:52:16.094250 216535 out.go:270] X Problems detected in kubelet:
W0120 17:52:16.094269 216535 out.go:270] Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:16.094283 216535 out.go:270] Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:16.094294 216535 out.go:270] Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:16.094301 216535 out.go:270] Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:16.094307 216535 out.go:270] Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
I0120 17:52:16.094313 216535 out.go:358] Setting ErrFile to fd 2...
I0120 17:52:16.094320 216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:52:26.095908 216535 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0120 17:52:26.165226 216535 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0120 17:52:26.168436 216535 out.go:201]
W0120 17:52:26.171235 216535 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0120 17:52:26.171279 216535 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0120 17:52:26.171300 216535 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0120 17:52:26.171306 216535 out.go:270] *
W0120 17:52:26.172503 216535 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0120 17:52:26.175703 216535 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-145659 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
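Reading the tail of the stderr above: the apiserver's /healthz endpoint answers 200/ok at 17:52:26, yet the start still aborts with K8S_UNHEALTHY_CONTROL_PLANE, because the 6m0s node wait expired while the control plane never reported the requested v1.20.0. The probe at api_server.go:253/279 is just an HTTPS GET against the forwarded apiserver port; a minimal Go sketch of such a probe (the function name and hard-coded endpoint are illustrative, not minikube's internal API; verification is skipped because the cluster cert is self-signed):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one HTTPS GET against an apiserver healthz URL
// and returns the status code plus body, e.g. "200: ok" as in the log.
func checkHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert inside the cluster, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%d: %s", resp.StatusCode, body), nil
}

func main() {
	// 192.168.76.2:8443 is the endpoint probed in the log above.
	out, err := checkHealthz("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	fmt.Println(out)
}

A 200 here only proves the apiserver process is serving; it says nothing about whether the node's advertised control-plane version ever matched, which is the condition this test timed out on.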
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-145659
helpers_test.go:235: (dbg) docker inspect old-k8s-version-145659:
-- stdout --
[
{
"Id": "68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2",
"Created": "2025-01-20T17:43:12.54738171Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 216800,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-01-20T17:46:06.431464742Z",
"FinishedAt": "2025-01-20T17:46:05.303575018Z"
},
"Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
"ResolvConfPath": "/var/lib/docker/containers/68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2/hostname",
"HostsPath": "/var/lib/docker/containers/68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2/hosts",
"LogPath": "/var/lib/docker/containers/68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2/68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2-json.log",
"Name": "/old-k8s-version-145659",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-145659:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-145659",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/dbca99fa5f002a2c275f8a6dcc3ecf11fb45907a2a29cad3b384ad85f73eae59-init/diff:/var/lib/docker/overlay2/9b176083dace6a900153a2b6e94fac06a5680ba9c3cc84680719d1cb51350052/diff",
"MergedDir": "/var/lib/docker/overlay2/dbca99fa5f002a2c275f8a6dcc3ecf11fb45907a2a29cad3b384ad85f73eae59/merged",
"UpperDir": "/var/lib/docker/overlay2/dbca99fa5f002a2c275f8a6dcc3ecf11fb45907a2a29cad3b384ad85f73eae59/diff",
"WorkDir": "/var/lib/docker/overlay2/dbca99fa5f002a2c275f8a6dcc3ecf11fb45907a2a29cad3b384ad85f73eae59/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "old-k8s-version-145659",
"Source": "/var/lib/docker/volumes/old-k8s-version-145659/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "old-k8s-version-145659",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-145659",
"name.minikube.sigs.k8s.io": "old-k8s-version-145659",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "a9624f41dcb0188ab87b3e9fd8b2712388e69a5f22bcce69e5a74569b21564d0",
"SandboxKey": "/var/run/docker/netns/a9624f41dcb0",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33063"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33064"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33067"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33065"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33066"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-145659": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "86eb91404aba21fb833837cdd78311917e6f4c87eaa2c3ae30f9551926747b07",
"EndpointID": "616b0be2d30f8285abb9cde0029dfeba76553216f02ad6ad743bcf537f05547a",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-145659",
"68f5886dcfe3"
]
}
}
}
}
]
-- /stdout --
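The inspect output confirms the container itself is healthy: running since 17:46:06, restart count 0, with the apiserver's 8443/tcp published to 127.0.0.1:33066 on the Jenkins host. The same --format templating the harness uses elsewhere in this log (cli_runner.go) can pull that mapping out directly; a small Go sketch shelling out to the docker CLI (the container name is taken from this run, everything else is standard docker templating):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask docker for the host port bound to the container's 8443/tcp,
	// i.e. the NetworkSettings.Ports entry shown in the inspect above.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"old-k8s-version-145659").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println(strings.TrimSpace(string(out))) // prints 33066 for this run
}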
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-145659 -n old-k8s-version-145659
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-145659 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-145659 logs -n 25: (3.015713004s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| ssh | force-systemd-flag-832515 | force-systemd-flag-832515 | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:42 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-flag-832515 | force-systemd-flag-832515 | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:42 UTC |
| start | -p cert-expiration-156373 | cert-expiration-156373 | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:42 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-062715 | force-systemd-env-062715 | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:42 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-062715 | force-systemd-env-062715 | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:42 UTC |
| start | -p cert-options-779915 | cert-options-779915 | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:43 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-779915 ssh | cert-options-779915 | jenkins | v1.35.0 | 20 Jan 25 17:43 UTC | 20 Jan 25 17:43 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-779915 -- sudo | cert-options-779915 | jenkins | v1.35.0 | 20 Jan 25 17:43 UTC | 20 Jan 25 17:43 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-779915 | cert-options-779915 | jenkins | v1.35.0 | 20 Jan 25 17:43 UTC | 20 Jan 25 17:43 UTC |
| start | -p old-k8s-version-145659 | old-k8s-version-145659 | jenkins | v1.35.0 | 20 Jan 25 17:43 UTC | 20 Jan 25 17:45 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-156373 | cert-expiration-156373 | jenkins | v1.35.0 | 20 Jan 25 17:45 UTC | 20 Jan 25 17:45 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| addons | enable metrics-server -p old-k8s-version-145659 | old-k8s-version-145659 | jenkins | v1.35.0 | 20 Jan 25 17:45 UTC | 20 Jan 25 17:45 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-145659 | old-k8s-version-145659 | jenkins | v1.35.0 | 20 Jan 25 17:45 UTC | 20 Jan 25 17:46 UTC |
| | --alsologtostderr -v=3 | | | | | |
| delete | -p cert-expiration-156373 | cert-expiration-156373 | jenkins | v1.35.0 | 20 Jan 25 17:45 UTC | 20 Jan 25 17:45 UTC |
| start | -p embed-certs-698725 | embed-certs-698725 | jenkins | v1.35.0 | 20 Jan 25 17:45 UTC | 20 Jan 25 17:47 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.0 | | | | | |
| addons | enable dashboard -p old-k8s-version-145659 | old-k8s-version-145659 | jenkins | v1.35.0 | 20 Jan 25 17:46 UTC | 20 Jan 25 17:46 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-145659 | old-k8s-version-145659 | jenkins | v1.35.0 | 20 Jan 25 17:46 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p embed-certs-698725 | embed-certs-698725 | jenkins | v1.35.0 | 20 Jan 25 17:47 UTC | 20 Jan 25 17:47 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p embed-certs-698725 | embed-certs-698725 | jenkins | v1.35.0 | 20 Jan 25 17:47 UTC | 20 Jan 25 17:47 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p embed-certs-698725 | embed-certs-698725 | jenkins | v1.35.0 | 20 Jan 25 17:47 UTC | 20 Jan 25 17:47 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p embed-certs-698725 | embed-certs-698725 | jenkins | v1.35.0 | 20 Jan 25 17:47 UTC | 20 Jan 25 17:52 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.0 | | | | | |
| image | embed-certs-698725 image list | embed-certs-698725 | jenkins | v1.35.0 | 20 Jan 25 17:52 UTC | 20 Jan 25 17:52 UTC |
| | --format=json | | | | | |
| pause | -p embed-certs-698725 | embed-certs-698725 | jenkins | v1.35.0 | 20 Jan 25 17:52 UTC | 20 Jan 25 17:52 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p embed-certs-698725 | embed-certs-698725 | jenkins | v1.35.0 | 20 Jan 25 17:52 UTC | 20 Jan 25 17:52 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p embed-certs-698725 | embed-certs-698725 | jenkins | v1.35.0 | 20 Jan 25 17:52 UTC | |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/20 17:47:43
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.23.4 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0120 17:47:43.016426 222240 out.go:345] Setting OutFile to fd 1 ...
I0120 17:47:43.016699 222240 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:47:43.016873 222240 out.go:358] Setting ErrFile to fd 2...
I0120 17:47:43.016886 222240 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:47:43.017221 222240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
I0120 17:47:43.017747 222240 out.go:352] Setting JSON to false
I0120 17:47:43.019057 222240 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5407,"bootTime":1737389856,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0120 17:47:43.019215 222240 start.go:139] virtualization:
I0120 17:47:43.022633 222240 out.go:177] * [embed-certs-698725] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0120 17:47:43.026419 222240 out.go:177] - MINIKUBE_LOCATION=20109
I0120 17:47:43.026536 222240 notify.go:220] Checking for updates...
I0120 17:47:43.032725 222240 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 17:47:43.035753 222240 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
I0120 17:47:43.038581 222240 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
I0120 17:47:43.041469 222240 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0120 17:47:43.044258 222240 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 17:47:43.047901 222240 config.go:182] Loaded profile config "embed-certs-698725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:47:43.048455 222240 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 17:47:43.070041 222240 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
I0120 17:47:43.070174 222240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 17:47:43.137590 222240 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 17:47:43.128169818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 17:47:43.137700 222240 docker.go:318] overlay module found
I0120 17:47:43.140983 222240 out.go:177] * Using the docker driver based on existing profile
I0120 17:47:43.143779 222240 start.go:297] selected driver: docker
I0120 17:47:43.143802 222240 start.go:901] validating driver "docker" against &{Name:embed-certs-698725 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-698725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 17:47:43.143924 222240 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 17:47:43.144640 222240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 17:47:43.200514 222240 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 17:47:43.189158118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 17:47:43.202585 222240 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 17:47:43.202619 222240 cni.go:84] Creating CNI manager for ""
I0120 17:47:43.202681 222240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 17:47:43.202718 222240 start.go:340] cluster config:
{Name:embed-certs-698725 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-698725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 17:47:43.206163 222240 out.go:177] * Starting "embed-certs-698725" primary control-plane node in "embed-certs-698725" cluster
I0120 17:47:43.209083 222240 cache.go:121] Beginning downloading kic base image for docker with containerd
I0120 17:47:43.212024 222240 out.go:177] * Pulling base image v0.0.46 ...
I0120 17:47:43.214886 222240 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 17:47:43.214948 222240 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
I0120 17:47:43.214960 222240 cache.go:56] Caching tarball of preloaded images
I0120 17:47:43.214989 222240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0120 17:47:43.215097 222240 preload.go:172] Found /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0120 17:47:43.215109 222240 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
I0120 17:47:43.215239 222240 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/config.json ...
I0120 17:47:43.246059 222240 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0120 17:47:43.246079 222240 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0120 17:47:43.246103 222240 cache.go:227] Successfully downloaded all kic artifacts
I0120 17:47:43.246136 222240 start.go:360] acquireMachinesLock for embed-certs-698725: {Name:mkb9032627882ab94f8c709279fd09e6fbf6e44e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 17:47:43.246201 222240 start.go:364] duration metric: took 47.696µs to acquireMachinesLock for "embed-certs-698725"
I0120 17:47:43.246222 222240 start.go:96] Skipping create...Using existing machine configuration
I0120 17:47:43.246227 222240 fix.go:54] fixHost starting:
I0120 17:47:43.246483 222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
I0120 17:47:43.264178 222240 fix.go:112] recreateIfNeeded on embed-certs-698725: state=Stopped err=<nil>
W0120 17:47:43.264213 222240 fix.go:138] unexpected machine state, will restart: <nil>
I0120 17:47:43.267550 222240 out.go:177] * Restarting existing docker container for "embed-certs-698725" ...
I0120 17:47:42.704192 216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:44.704169 216535 pod_ready.go:93] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
I0120 17:47:44.704198 216535 pod_ready.go:82] duration metric: took 1m11.006642524s for pod "etcd-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:47:44.704215 216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:47:44.709524 216535 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
I0120 17:47:44.709549 216535 pod_ready.go:82] duration metric: took 5.326555ms for pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:47:44.709561 216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:47:43.270488 222240 cli_runner.go:164] Run: docker start embed-certs-698725
I0120 17:47:43.601651 222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
I0120 17:47:43.626806 222240 kic.go:430] container "embed-certs-698725" state is running.
I0120 17:47:43.627311 222240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-698725
I0120 17:47:43.653895 222240 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/config.json ...
I0120 17:47:43.654118 222240 machine.go:93] provisionDockerMachine start ...
I0120 17:47:43.654179 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:43.680957 222240 main.go:141] libmachine: Using SSH client type: native
I0120 17:47:43.681264 222240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I0120 17:47:43.681274 222240 main.go:141] libmachine: About to run SSH command:
hostname
I0120 17:47:43.682064 222240 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0120 17:47:46.806927 222240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-698725
I0120 17:47:46.806958 222240 ubuntu.go:169] provisioning hostname "embed-certs-698725"
I0120 17:47:46.807021 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:46.824726 222240 main.go:141] libmachine: Using SSH client type: native
I0120 17:47:46.825025 222240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I0120 17:47:46.825046 222240 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-698725 && echo "embed-certs-698725" | sudo tee /etc/hostname
I0120 17:47:46.968695 222240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-698725
I0120 17:47:46.968778 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:46.987494 222240 main.go:141] libmachine: Using SSH client type: native
I0120 17:47:46.987771 222240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I0120 17:47:46.987794 222240 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-698725' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-698725/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-698725' | sudo tee -a /etc/hosts;
fi
fi
I0120 17:47:47.111545 222240 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 17:47:47.111570 222240 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2518/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2518/.minikube}
I0120 17:47:47.111606 222240 ubuntu.go:177] setting up certificates
I0120 17:47:47.111616 222240 provision.go:84] configureAuth start
I0120 17:47:47.111686 222240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-698725
I0120 17:47:47.128644 222240 provision.go:143] copyHostCerts
I0120 17:47:47.128713 222240 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem, removing ...
I0120 17:47:47.128726 222240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem
I0120 17:47:47.128800 222240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem (1082 bytes)
I0120 17:47:47.128895 222240 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem, removing ...
I0120 17:47:47.128905 222240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem
I0120 17:47:47.128931 222240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem (1123 bytes)
I0120 17:47:47.128987 222240 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem, removing ...
I0120 17:47:47.128996 222240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem
I0120 17:47:47.129025 222240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem (1679 bytes)
I0120 17:47:47.129078 222240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem org=jenkins.embed-certs-698725 san=[127.0.0.1 192.168.85.2 embed-certs-698725 localhost minikube]
I0120 17:47:48.350585 222240 provision.go:177] copyRemoteCerts
I0120 17:47:48.350662 222240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 17:47:48.350704 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:48.368996 222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
I0120 17:47:48.461392 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0120 17:47:48.495521 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0120 17:47:48.534237 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0120 17:47:48.566492 222240 provision.go:87] duration metric: took 1.454844886s to configureAuth
I0120 17:47:48.566568 222240 ubuntu.go:193] setting minikube options for container-runtime
I0120 17:47:48.566803 222240 config.go:182] Loaded profile config "embed-certs-698725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:47:48.566833 222240 machine.go:96] duration metric: took 4.912698605s to provisionDockerMachine
I0120 17:47:48.566854 222240 start.go:293] postStartSetup for "embed-certs-698725" (driver="docker")
I0120 17:47:48.566876 222240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 17:47:48.566952 222240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 17:47:48.567010 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:48.585796 222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
I0120 17:47:48.681128 222240 ssh_runner.go:195] Run: cat /etc/os-release
I0120 17:47:48.684731 222240 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0120 17:47:48.684812 222240 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0120 17:47:48.684830 222240 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0120 17:47:48.684838 222240 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0120 17:47:48.684849 222240 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2518/.minikube/addons for local assets ...
I0120 17:47:48.684908 222240 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2518/.minikube/files for local assets ...
I0120 17:47:48.684999 222240 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem -> 78442.pem in /etc/ssl/certs
I0120 17:47:48.685116 222240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 17:47:48.694367 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem --> /etc/ssl/certs/78442.pem (1708 bytes)
I0120 17:47:48.724378 222240 start.go:296] duration metric: took 157.498225ms for postStartSetup
I0120 17:47:48.724505 222240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0120 17:47:48.724598 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:48.741729 222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
I0120 17:47:48.829412 222240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0120 17:47:48.834247 222240 fix.go:56] duration metric: took 5.588011932s for fixHost
I0120 17:47:48.834285 222240 start.go:83] releasing machines lock for "embed-certs-698725", held for 5.588062148s
I0120 17:47:48.834362 222240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-698725
I0120 17:47:48.856790 222240 ssh_runner.go:195] Run: cat /version.json
I0120 17:47:48.856846 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:48.856791 222240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 17:47:48.857002 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:48.902458 222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
I0120 17:47:48.902714 222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
I0120 17:47:49.139656 222240 ssh_runner.go:195] Run: systemctl --version
I0120 17:47:49.144292 222240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0120 17:47:49.148783 222240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0120 17:47:49.167194 222240 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0120 17:47:49.167276 222240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 17:47:49.176321 222240 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0120 17:47:49.176357 222240 start.go:495] detecting cgroup driver to use...
I0120 17:47:49.176389 222240 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0120 17:47:49.176441 222240 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 17:47:49.191275 222240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 17:47:49.203898 222240 docker.go:217] disabling cri-docker service (if available) ...
I0120 17:47:49.203967 222240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 17:47:49.219102 222240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 17:47:49.232158 222240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 17:47:49.326287 222240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 17:47:49.414084 222240 docker.go:233] disabling docker service ...
I0120 17:47:49.414159 222240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 17:47:49.428166 222240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 17:47:49.440037 222240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 17:47:49.544699 222240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 17:47:49.624991 222240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 17:47:49.636913 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 17:47:49.654784 222240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0120 17:47:49.665476 222240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 17:47:49.676944 222240 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 17:47:49.677017 222240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 17:47:49.688153 222240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 17:47:49.701919 222240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 17:47:49.719262 222240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 17:47:49.729492 222240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 17:47:49.739189 222240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 17:47:49.750351 222240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0120 17:47:49.760763 222240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0120 17:47:49.772111 222240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 17:47:49.782071 222240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 17:47:49.790910 222240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 17:47:49.870602 222240 ssh_runner.go:195] Run: sudo systemctl restart containerd
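
The sed chain above rewrites /etc/containerd/config.toml to match the detected "cgroupfs" driver: pin the pause image, set SystemdCgroup = false, switch any v1 runtimes to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, after which containerd is restarted to pick the file up. A Go sketch of an illustrative subset of those rewrites, assuming local file access rather than the ssh_runner used here:

package main

import (
	"log"
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalents of a few of the sed expressions in the log.
	rules := []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // cgroupfs driver
		{`"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`},
		{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, r := range rules {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
	// Pick up the new config, exactly as the runner does next.
	for _, args := range [][]string{{"daemon-reload"}, {"restart", "containerd"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}
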
I0120 17:47:50.055722 222240 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0120 17:47:50.055853 222240 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 17:47:50.060344 222240 start.go:563] Will wait 60s for crictl version
I0120 17:47:50.060459 222240 ssh_runner.go:195] Run: which crictl
I0120 17:47:50.064168 222240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 17:47:50.110865 222240 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.24
RuntimeApiVersion: v1
I0120 17:47:50.110968 222240 ssh_runner.go:195] Run: containerd --version
I0120 17:47:50.136248 222240 ssh_runner.go:195] Run: containerd --version
I0120 17:47:50.182377 222240 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.24 ...
I0120 17:47:46.720597 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:49.218373 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:50.185254 222240 cli_runner.go:164] Run: docker network inspect embed-certs-698725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 17:47:50.205731 222240 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0120 17:47:50.209580 222240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
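
The /bin/bash one-liner above is the idiom used for idempotent /etc/hosts updates: filter out any previous host.minikube.internal entry, append the current gateway IP, and copy the result back over the file. A Go sketch of the same filter-and-append, assuming direct write access to /etc/hosts:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/etc/hosts"
		name      = "host.minikube.internal"
		ip        = "192.168.85.1"
	)
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror the grep -v step: drop any prior tab-separated entry for the name.
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	out := strings.Join(kept, "\n")
	if !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	out += ip + "\t" + name + "\n"
	// The log stages through /tmp/h.$$ and sudo cp; a direct write is shown here.
	if err := os.WriteFile(hostsPath, []byte(out), 0o644); err != nil {
		log.Fatal(err)
	}
}
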
I0120 17:47:50.222845 222240 kubeadm.go:883] updating cluster {Name:embed-certs-698725 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-698725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 17:47:50.222959 222240 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 17:47:50.223017 222240 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 17:47:50.268222 222240 containerd.go:627] all images are preloaded for containerd runtime.
I0120 17:47:50.268246 222240 containerd.go:534] Images already preloaded, skipping extraction
I0120 17:47:50.268305 222240 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 17:47:50.310524 222240 containerd.go:627] all images are preloaded for containerd runtime.
I0120 17:47:50.310547 222240 cache_images.go:84] Images are preloaded, skipping loading
I0120 17:47:50.310556 222240 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.0 containerd true true} ...
I0120 17:47:50.310697 222240 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-698725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.0 ClusterName:embed-certs-698725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0120 17:47:50.310768 222240 ssh_runner.go:195] Run: sudo crictl info
I0120 17:47:50.352774 222240 cni.go:84] Creating CNI manager for ""
I0120 17:47:50.352796 222240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 17:47:50.352807 222240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 17:47:50.352831 222240 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-698725 NodeName:embed-certs-698725 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0120 17:47:50.352947 222240 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "embed-certs-698725"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.85.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
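
The kubeadm config above is rendered from the options struct logged at kubeadm.go:189 and later written to /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of that rendering for just the InitConfiguration stanza, using a hypothetical trimmed-down options type (the real template covers all four documents):

package main

import (
	"log"
	"os"
	"text/template"
)

// Opts is a hypothetical, trimmed-down stand-in for the kubeadm options struct.
type Opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	o := Opts{
		AdvertiseAddress: "192.168.85.2",
		APIServerPort:    8443,
		NodeName:         "embed-certs-698725",
		CRISocket:        "/run/containerd/containerd.sock",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	// Prints the stanza matching the rendered config in the log above.
	if err := t.Execute(os.Stdout, o); err != nil {
		log.Fatal(err)
	}
}
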
I0120 17:47:50.353017 222240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
I0120 17:47:50.362779 222240 binaries.go:44] Found k8s binaries, skipping transfer
I0120 17:47:50.362850 222240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 17:47:50.372095 222240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0120 17:47:50.389782 222240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 17:47:50.408019 222240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0120 17:47:50.427095 222240 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0120 17:47:50.430454 222240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 17:47:50.441590 222240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 17:47:50.541022 222240 ssh_runner.go:195] Run: sudo systemctl start kubelet
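
Three files were copied into place above: the kubelet systemd drop-in (10-kubeadm.conf), the kubelet.service unit, and the new kubeadm.yaml, followed by a daemon-reload and kubelet start. A sketch of writing the drop-in shown earlier in the log and starting the service, assuming root on the node rather than the SSH runner:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same flags as the rendered unit logged at kubeadm.go:946.
	const dropIn = `[Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-698725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		log.Fatal(err)
	}
	// Reload units and start kubelet, as the runner does.
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}
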
I0120 17:47:50.563660 222240 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725 for IP: 192.168.85.2
I0120 17:47:50.563717 222240 certs.go:194] generating shared ca certs ...
I0120 17:47:50.563737 222240 certs.go:226] acquiring lock for ca certs: {Name:mk409d9cbe30328f0e66b0d712629bd4b02b995b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 17:47:50.564131 222240 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2518/.minikube/ca.key
I0120 17:47:50.564239 222240 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.key
I0120 17:47:50.564270 222240 certs.go:256] generating profile certs ...
I0120 17:47:50.564516 222240 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/client.key
I0120 17:47:50.564700 222240 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/apiserver.key.b47539b9
I0120 17:47:50.564795 222240 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/proxy-client.key
I0120 17:47:50.565120 222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844.pem (1338 bytes)
W0120 17:47:50.565208 222240 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844_empty.pem, impossibly tiny 0 bytes
I0120 17:47:50.565232 222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem (1679 bytes)
I0120 17:47:50.565274 222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem (1082 bytes)
I0120 17:47:50.565392 222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem (1123 bytes)
I0120 17:47:50.565485 222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem (1679 bytes)
I0120 17:47:50.565823 222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem (1708 bytes)
I0120 17:47:50.566966 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 17:47:50.595967 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0120 17:47:50.627058 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 17:47:50.655285 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0120 17:47:50.688377 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0120 17:47:50.733678 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0120 17:47:50.773018 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 17:47:50.802874 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0120 17:47:50.830674 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 17:47:50.860949 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844.pem --> /usr/share/ca-certificates/7844.pem (1338 bytes)
I0120 17:47:50.888360 222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem --> /usr/share/ca-certificates/78442.pem (1708 bytes)
I0120 17:47:50.917473 222240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 17:47:50.936554 222240 ssh_runner.go:195] Run: openssl version
I0120 17:47:50.944138 222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 17:47:50.954739 222240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 17:47:50.958356 222240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 16:58 /usr/share/ca-certificates/minikubeCA.pem
I0120 17:47:50.958456 222240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 17:47:50.965820 222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 17:47:50.975019 222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7844.pem && ln -fs /usr/share/ca-certificates/7844.pem /etc/ssl/certs/7844.pem"
I0120 17:47:50.984902 222240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7844.pem
I0120 17:47:50.988480 222240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:06 /usr/share/ca-certificates/7844.pem
I0120 17:47:50.988549 222240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7844.pem
I0120 17:47:50.995758 222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7844.pem /etc/ssl/certs/51391683.0"
I0120 17:47:51.005635 222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78442.pem && ln -fs /usr/share/ca-certificates/78442.pem /etc/ssl/certs/78442.pem"
I0120 17:47:51.016663 222240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78442.pem
I0120 17:47:51.020795 222240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 17:06 /usr/share/ca-certificates/78442.pem
I0120 17:47:51.020869 222240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78442.pem
I0120 17:47:51.028452 222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78442.pem /etc/ssl/certs/3ec20f2e.0"
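
Each CA installed under /usr/share/ca-certificates above is also linked into /etc/ssl/certs under its OpenSSL subject hash plus a ".0" suffix (b5213941.0 for minikubeCA in this run), which is how OpenSSL-based clients locate trust anchors. A sketch of that hash-and-symlink step, shelling out to the same openssl x509 -hash invocation:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors `openssl x509 -hash -noout -in cert` followed by
// `ln -fs cert /etc/ssl/certs/<hash>.0`.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA here
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // the -f in ln -fs: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/etc/ssl/certs/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
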
I0120 17:47:51.038125 222240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 17:47:51.042090 222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0120 17:47:51.049550 222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0120 17:47:51.057341 222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0120 17:47:51.064856 222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0120 17:47:51.072276 222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0120 17:47:51.079627 222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
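
The openssl x509 -checkend 86400 runs above ask one question per cert: does it expire within the next 24 hours? Any cert that does would be regenerated before the cluster is reused. An equivalent check in Go with crypto/x509, assuming the certs are readable locally:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// inside d, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon) // true would trigger regeneration
}
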
I0120 17:47:51.087042 222240 kubeadm.go:392] StartCluster: {Name:embed-certs-698725 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-698725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 17:47:51.087200 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0120 17:47:51.087292 222240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 17:47:51.127972 222240 cri.go:89] found id: "f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
I0120 17:47:51.127998 222240 cri.go:89] found id: "a249b6a6bd06a920eea275ddf24e32bbdfb772be3581b64a0ec16ff624981de2"
I0120 17:47:51.128004 222240 cri.go:89] found id: "03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
I0120 17:47:51.128008 222240 cri.go:89] found id: "a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
I0120 17:47:51.128011 222240 cri.go:89] found id: "c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
I0120 17:47:51.128016 222240 cri.go:89] found id: "2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
I0120 17:47:51.128019 222240 cri.go:89] found id: "b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
I0120 17:47:51.128022 222240 cri.go:89] found id: "21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
I0120 17:47:51.128026 222240 cri.go:89] found id: ""
I0120 17:47:51.128079 222240 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0120 17:47:51.146118 222240 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-20T17:47:51Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
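
The warning above is benign: runc list fails with "no such file or directory" when its root directory holds no containers, and that is treated the same as an empty list of paused containers before the start proceeds to the config-file check below. A sketch of that fallback, assuming the error text surfaces on combined stdout/stderr:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// A missing runc root simply means there is nothing to unpause.
	out, err := exec.Command("sudo", "runc", "--root", "/run/containerd/runc/k8s.io",
		"list", "-f", "json").CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such file or directory") {
		fmt.Println("no paused containers to unpause")
		return
	}
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Println(string(out))
}
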
I0120 17:47:51.146232 222240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 17:47:51.156333 222240 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0120 17:47:51.156354 222240 kubeadm.go:593] restartPrimaryControlPlane start ...
I0120 17:47:51.156406 222240 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0120 17:47:51.165984 222240 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0120 17:47:51.166618 222240 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-698725" does not appear in /home/jenkins/minikube-integration/20109-2518/kubeconfig
I0120 17:47:51.166901 222240 kubeconfig.go:62] /home/jenkins/minikube-integration/20109-2518/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-698725" cluster setting kubeconfig missing "embed-certs-698725" context setting]
I0120 17:47:51.167474 222240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2518/kubeconfig: {Name:mk7eb37afa68734d2ba48fcac1147e4fe5c87411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 17:47:51.168853 222240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0120 17:47:51.179263 222240 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I0120 17:47:51.179310 222240 kubeadm.go:597] duration metric: took 22.949431ms to restartPrimaryControlPlane
I0120 17:47:51.179320 222240 kubeadm.go:394] duration metric: took 92.289811ms to StartCluster
I0120 17:47:51.179336 222240 settings.go:142] acquiring lock: {Name:mk1c7d255bd6ff729fb7f0cda8440d084eb0c286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 17:47:51.179502 222240 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20109-2518/kubeconfig
I0120 17:47:51.180779 222240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2518/kubeconfig: {Name:mk7eb37afa68734d2ba48fcac1147e4fe5c87411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 17:47:51.180992 222240 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 17:47:51.181481 222240 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 17:47:51.181554 222240 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-698725"
I0120 17:47:51.181571 222240 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-698725"
W0120 17:47:51.181585 222240 addons.go:247] addon storage-provisioner should already be in state true
I0120 17:47:51.181609 222240 host.go:66] Checking if "embed-certs-698725" exists ...
I0120 17:47:51.182100 222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
I0120 17:47:51.182339 222240 config.go:182] Loaded profile config "embed-certs-698725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:47:51.182515 222240 addons.go:69] Setting default-storageclass=true in profile "embed-certs-698725"
I0120 17:47:51.182538 222240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-698725"
I0120 17:47:51.182844 222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
I0120 17:47:51.183120 222240 addons.go:69] Setting metrics-server=true in profile "embed-certs-698725"
I0120 17:47:51.183182 222240 addons.go:238] Setting addon metrics-server=true in "embed-certs-698725"
W0120 17:47:51.183203 222240 addons.go:247] addon metrics-server should already be in state true
I0120 17:47:51.183258 222240 host.go:66] Checking if "embed-certs-698725" exists ...
I0120 17:47:51.183879 222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
I0120 17:47:51.187028 222240 addons.go:69] Setting dashboard=true in profile "embed-certs-698725"
I0120 17:47:51.187072 222240 addons.go:238] Setting addon dashboard=true in "embed-certs-698725"
W0120 17:47:51.187081 222240 addons.go:247] addon dashboard should already be in state true
I0120 17:47:51.187118 222240 host.go:66] Checking if "embed-certs-698725" exists ...
I0120 17:47:51.187653 222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
I0120 17:47:51.191852 222240 out.go:177] * Verifying Kubernetes components...
I0120 17:47:51.195323 222240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 17:47:51.248151 222240 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 17:47:51.251160 222240 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 17:47:51.251182 222240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 17:47:51.251253 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:51.270235 222240 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0120 17:47:51.273400 222240 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0120 17:47:51.277130 222240 addons.go:238] Setting addon default-storageclass=true in "embed-certs-698725"
W0120 17:47:51.277153 222240 addons.go:247] addon default-storageclass should already be in state true
I0120 17:47:51.277177 222240 host.go:66] Checking if "embed-certs-698725" exists ...
I0120 17:47:51.277601 222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
I0120 17:47:51.277815 222240 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0120 17:47:51.283449 222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0120 17:47:51.283483 222240 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0120 17:47:51.283557 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:51.284063 222240 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0120 17:47:51.284078 222240 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0120 17:47:51.284134 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:51.309598 222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
I0120 17:47:51.330575 222240 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 17:47:51.330595 222240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 17:47:51.330670 222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
I0120 17:47:51.357154 222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
I0120 17:47:51.365511 222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
I0120 17:47:51.382832 222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
I0120 17:47:51.409724 222240 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 17:47:51.481912 222240 node_ready.go:35] waiting up to 6m0s for node "embed-certs-698725" to be "Ready" ...
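
node_ready.go polls the node object until its Ready condition reports True, which takes 5.45s in this run. A sketch of the same wait with client-go, assuming the kubeconfig path from this job and the classic wait.PollImmediate helper:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20109-2518/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll until the node reports Ready=True, with the same 6m budget as the log.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-698725", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(`node "embed-certs-698725" is Ready`)
}
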
I0120 17:47:51.605075 222240 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0120 17:47:51.605095 222240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0120 17:47:51.649299 222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0120 17:47:51.649365 222240 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0120 17:47:51.669197 222240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 17:47:51.692726 222240 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0120 17:47:51.692825 222240 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0120 17:47:51.802585 222240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 17:47:51.807299 222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0120 17:47:51.807437 222240 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0120 17:47:51.828979 222240 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0120 17:47:51.829085 222240 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0120 17:47:51.864022 222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0120 17:47:51.864125 222240 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0120 17:47:51.957526 222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0120 17:47:51.957604 222240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0120 17:47:52.069854 222240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 17:47:52.259830 222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0120 17:47:52.259899 222240 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0120 17:47:52.495822 222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0120 17:47:52.495914 222240 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0120 17:47:52.610042 222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0120 17:47:52.610124 222240 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0120 17:47:52.648240 222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0120 17:47:52.648417 222240 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0120 17:47:52.695144 222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0120 17:47:52.695219 222240 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0120 17:47:52.734305 222240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 17:47:51.220240 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:53.731463 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:56.933993 222240 node_ready.go:49] node "embed-certs-698725" has status "Ready":"True"
I0120 17:47:56.934030 222240 node_ready.go:38] duration metric: took 5.452072442s for node "embed-certs-698725" to be "Ready" ...
I0120 17:47:56.934042 222240 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
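
pod_ready.go applies the same rule to every system-critical pod it waits on, including the metrics-server polls below: a pod counts as "Ready" only when its PodReady condition is True, which is why a Running metrics-server pod with a failing readiness probe keeps logging "Ready":"False" until the timeout. A minimal sketch of that per-pod check:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady mirrors the gate behind pod_ready.go: a pod is "Ready" only when
// its PodReady condition reports True, regardless of phase.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(isPodReady(pod)) // false, matching the metrics-server polls below
}
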
I0120 17:47:56.962943 222240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-hpgxx" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.010755 222240 pod_ready.go:93] pod "coredns-668d6bf9bc-hpgxx" in "kube-system" namespace has status "Ready":"True"
I0120 17:47:57.010793 222240 pod_ready.go:82] duration metric: took 47.81453ms for pod "coredns-668d6bf9bc-hpgxx" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.010806 222240 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.030415 222240 pod_ready.go:93] pod "etcd-embed-certs-698725" in "kube-system" namespace has status "Ready":"True"
I0120 17:47:57.030446 222240 pod_ready.go:82] duration metric: took 19.631401ms for pod "etcd-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.030463 222240 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.059418 222240 pod_ready.go:93] pod "kube-apiserver-embed-certs-698725" in "kube-system" namespace has status "Ready":"True"
I0120 17:47:57.059485 222240 pod_ready.go:82] duration metric: took 29.013139ms for pod "kube-apiserver-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.059512 222240 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.081254 222240 pod_ready.go:93] pod "kube-controller-manager-embed-certs-698725" in "kube-system" namespace has status "Ready":"True"
I0120 17:47:57.081331 222240 pod_ready.go:82] duration metric: took 21.787776ms for pod "kube-controller-manager-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.081359 222240 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cxzfl" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.156414 222240 pod_ready.go:93] pod "kube-proxy-cxzfl" in "kube-system" namespace has status "Ready":"True"
I0120 17:47:57.156479 222240 pod_ready.go:82] duration metric: took 75.100014ms for pod "kube-proxy-cxzfl" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.156506 222240 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.538976 222240 pod_ready.go:93] pod "kube-scheduler-embed-certs-698725" in "kube-system" namespace has status "Ready":"True"
I0120 17:47:57.539052 222240 pod_ready.go:82] duration metric: took 382.524773ms for pod "kube-scheduler-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
I0120 17:47:57.539078 222240 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace to be "Ready" ...
I0120 17:47:59.546251 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:00.172426 222240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.369737788s)
I0120 17:48:00.172972 222240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.103017874s)
I0120 17:48:00.173045 222240 addons.go:479] Verifying addon metrics-server=true in "embed-certs-698725"
I0120 17:48:00.173853 222240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.504567299s)
I0120 17:48:00.284550 222240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.550136834s)
I0120 17:48:00.288168 222240 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-698725 addons enable metrics-server
I0120 17:48:00.382565 222240 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
I0120 17:47:56.219975 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:47:58.715969 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:00.716172 216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:00.385911 222240 addons.go:514] duration metric: took 9.20441952s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
I0120 17:48:02.047336 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:01.717465 216535 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
I0120 17:48:01.717491 216535 pod_ready.go:82] duration metric: took 17.007921004s for pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:48:01.717503 216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxqgj" in "kube-system" namespace to be "Ready" ...
I0120 17:48:01.723321 216535 pod_ready.go:93] pod "kube-proxy-mxqgj" in "kube-system" namespace has status "Ready":"True"
I0120 17:48:01.723396 216535 pod_ready.go:82] duration metric: took 5.87229ms for pod "kube-proxy-mxqgj" in "kube-system" namespace to be "Ready" ...
I0120 17:48:01.723409 216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:48:01.729329 216535 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
I0120 17:48:01.729356 216535 pod_ready.go:82] duration metric: took 5.938522ms for pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
I0120 17:48:01.729367 216535 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace to be "Ready" ...
I0120 17:48:03.811502 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:04.050574 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:06.059379 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:06.253893 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:08.739025 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:08.547983 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:11.048009 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:11.239058 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:13.736337 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:15.736465 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:13.549834 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:16.046201 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:18.247835 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:20.747201 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:18.545992 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:21.046806 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:23.242774 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:25.735545 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:23.545728 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:25.545844 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:27.736290 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:30.243746 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:28.045721 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:30.050780 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:32.545386 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:32.737127 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:34.737472 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:35.044878 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:37.045790 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:37.243570 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:39.245847 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:39.545372 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:41.545668 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:41.736938 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:44.242652 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:44.047439 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:46.546188 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:46.736378 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:49.243543 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:49.045049 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:51.047621 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:51.243642 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:53.244529 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:55.245129 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:53.545682 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:56.046309 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:57.736190 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:00.244816 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:48:58.546626 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:01.052472 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:02.245766 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:04.295578 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:03.545106 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:05.546016 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:06.736622 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:08.737036 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:10.737207 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:08.045654 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:10.045773 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:12.046443 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:13.242704 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:15.735684 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:14.545187 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:17.045842 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:17.737791 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:20.244523 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:19.046226 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:21.046493 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:22.244600 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:24.735659 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:23.546467 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:25.546509 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:26.736790 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:29.250223 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:28.046111 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:30.048239 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:32.544773 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:31.753850 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:34.236244 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:34.546808 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:37.045755 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:36.243241 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:38.736581 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:40.736828 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:39.544955 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:41.546930 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:43.238843 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:45.736169 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:44.049323 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:46.549194 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:47.736599 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:50.244905 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:49.045360 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:51.545331 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:52.737487 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:54.754561 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:54.045899 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:56.047862 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:57.236641 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:59.238929 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:49:58.548289 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:01.045428 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:01.241741 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:03.242427 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:05.736193 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:03.045861 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:05.046174 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:07.545820 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:07.736416 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:10.240839 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:09.548802 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:12.046389 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:12.244010 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:14.246547 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:14.545609 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:16.545885 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:16.737206 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:19.244440 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:18.546156 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:21.045616 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:21.244729 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:23.736600 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:23.545938 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:26.046091 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:26.244612 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:28.250474 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:30.739819 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:28.545649 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:30.545747 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:33.245363 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:35.737773 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:33.049985 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:35.058335 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:37.546305 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:37.742221 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:40.237488 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:40.047483 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:42.544937 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:42.738257 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:45.239382 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:44.545865 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:47.045906 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:47.736272 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:50.236202 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:49.046122 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:51.046239 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:52.239206 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:54.244758 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:53.545581 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:55.545829 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:56.736346 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:59.237672 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:50:58.046288 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:00.066802 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:02.545102 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:01.244367 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:03.736783 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:05.737354 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:04.545959 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:07.046059 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:08.235650 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:10.237001 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:09.049918 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:11.545323 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:12.237848 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:14.240863 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:14.045756 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:16.046288 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:16.243349 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:18.737611 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:18.046831 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:20.052058 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:22.546075 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:21.244639 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:23.735945 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:25.045178 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:27.046055 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:26.242287 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:28.735482 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:30.736321 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:29.545870 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:31.546237 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:32.736991 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:35.236754 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:34.046112 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:36.048791 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:37.244823 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:39.735311 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:38.546060 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:41.045351 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:41.735810 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:43.736169 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:45.742400 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:43.046135 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:45.047911 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:47.545521 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:48.243218 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:50.244231 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:49.545593 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:52.045901 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:52.244707 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:54.248009 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:54.545986 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:57.045628 222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:57.545573 222240 pod_ready.go:82] duration metric: took 4m0.006469687s for pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace to be "Ready" ...
E0120 17:51:57.545600 222240 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0120 17:51:57.545610 222240 pod_ready.go:39] duration metric: took 4m0.611558284s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
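(Editor's note: the repeated pod_ready.go:103 lines above are a roughly 2-second poll of the pod's Ready condition that gives up after a fixed 4-minute per-pod deadline, which is what just expired. A minimal sketch of that kind of wait loop follows; the helper name waitPodReady and the use of kubectl via os/exec instead of minikube's internal Kubernetes client are assumptions for illustration, not minikube's pod_ready.go implementation.)

```go
// A minimal sketch (not minikube's pod_ready.go): poll a pod's Ready
// condition every interval until it reports "True" or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPodReady(ns, pod string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// kubectl prints the Ready condition's status: "True" or "False".
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("pod %q in %q namespace: context deadline exceeded after %s", pod, ns, timeout)
}

func main() {
	// The same 2s poll / 4m deadline as the wait above.
	err := waitPodReady("kube-system", "metrics-server-f79f97bbb-44zkt", 2*time.Second, 4*time.Minute)
	fmt.Println(err)
}
```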
I0120 17:51:57.545626 222240 api_server.go:52] waiting for apiserver process to appear ...
I0120 17:51:57.545656 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 17:51:57.545719 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 17:51:57.617009 222240 cri.go:89] found id: "05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
I0120 17:51:57.617036 222240 cri.go:89] found id: "c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
I0120 17:51:57.617041 222240 cri.go:89] found id: ""
I0120 17:51:57.617048 222240 logs.go:282] 2 containers: [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6]
I0120 17:51:57.617122 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.620712 222240 ssh_runner.go:195] Run: which crictl
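(Editor's note: each cri.go listing step above runs `sudo crictl ps -a --quiet --name=<component>` and treats every non-empty output line as a container ID. A small sketch of that pattern follows, assuming crictl is installed and sudo is available; listContainers is a hypothetical helper, not minikube's cri.go.)

```go
// Sketch of the listing step: run crictl with --quiet so it prints one
// container ID per line, then collect the non-empty lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		panic(err)
	}
	// Matches the "logs.go:282] N containers: [...]" lines above.
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
```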
I0120 17:51:57.624582 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 17:51:57.624654 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 17:51:57.676345 222240 cri.go:89] found id: "39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
I0120 17:51:57.676366 222240 cri.go:89] found id: "21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
I0120 17:51:57.676371 222240 cri.go:89] found id: ""
I0120 17:51:57.676378 222240 logs.go:282] 2 containers: [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5]
I0120 17:51:57.676439 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.680282 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.683581 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 17:51:57.683698 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 17:51:57.720584 222240 cri.go:89] found id: "76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
I0120 17:51:57.720646 222240 cri.go:89] found id: "f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
I0120 17:51:57.720657 222240 cri.go:89] found id: ""
I0120 17:51:57.720667 222240 logs.go:282] 2 containers: [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6]
I0120 17:51:57.720731 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.730284 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.737537 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 17:51:57.737615 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 17:51:57.776517 222240 cri.go:89] found id: "a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
I0120 17:51:57.776539 222240 cri.go:89] found id: "2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
I0120 17:51:57.776544 222240 cri.go:89] found id: ""
I0120 17:51:57.776552 222240 logs.go:282] 2 containers: [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f]
I0120 17:51:57.776606 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.779969 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.783102 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 17:51:57.783190 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 17:51:57.836806 222240 cri.go:89] found id: "b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
I0120 17:51:57.836834 222240 cri.go:89] found id: "a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
I0120 17:51:57.836840 222240 cri.go:89] found id: ""
I0120 17:51:57.836847 222240 logs.go:282] 2 containers: [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166]
I0120 17:51:57.836904 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.840666 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.844319 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 17:51:57.844393 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 17:51:57.894058 222240 cri.go:89] found id: "28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
I0120 17:51:57.894082 222240 cri.go:89] found id: "b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
I0120 17:51:57.894094 222240 cri.go:89] found id: ""
I0120 17:51:57.894102 222240 logs.go:282] 2 containers: [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a]
I0120 17:51:57.894165 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.898124 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.902319 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 17:51:57.902436 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 17:51:57.952198 222240 cri.go:89] found id: "f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
I0120 17:51:57.952230 222240 cri.go:89] found id: "03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
I0120 17:51:57.952236 222240 cri.go:89] found id: ""
I0120 17:51:57.952244 222240 logs.go:282] 2 containers: [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f]
I0120 17:51:57.952316 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.956592 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:57.960234 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 17:51:57.960332 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 17:51:58.013379 222240 cri.go:89] found id: "d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
I0120 17:51:58.013413 222240 cri.go:89] found id: ""
I0120 17:51:58.013422 222240 logs.go:282] 1 container: [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8]
I0120 17:51:58.013521 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:56.737674 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:59.241838 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:51:58.017708 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 17:51:58.017785 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 17:51:58.066449 222240 cri.go:89] found id: "edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
I0120 17:51:58.066475 222240 cri.go:89] found id: "68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
I0120 17:51:58.066481 222240 cri.go:89] found id: ""
I0120 17:51:58.066489 222240 logs.go:282] 2 containers: [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7]
I0120 17:51:58.066548 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:58.070700 222240 ssh_runner.go:195] Run: which crictl
I0120 17:51:58.074690 222240 logs.go:123] Gathering logs for kube-scheduler [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5] ...
I0120 17:51:58.074719 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
I0120 17:51:58.126524 222240 logs.go:123] Gathering logs for kindnet [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0] ...
I0120 17:51:58.126555 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
I0120 17:51:58.173512 222240 logs.go:123] Gathering logs for storage-provisioner [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee] ...
I0120 17:51:58.173542 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
I0120 17:51:58.221076 222240 logs.go:123] Gathering logs for storage-provisioner [68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7] ...
I0120 17:51:58.221108 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
I0120 17:51:58.290668 222240 logs.go:123] Gathering logs for container status ...
I0120 17:51:58.290697 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 17:51:58.348834 222240 logs.go:123] Gathering logs for etcd [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c] ...
I0120 17:51:58.348866 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
I0120 17:51:58.398407 222240 logs.go:123] Gathering logs for coredns [f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6] ...
I0120 17:51:58.398440 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
I0120 17:51:58.439843 222240 logs.go:123] Gathering logs for kube-scheduler [2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f] ...
I0120 17:51:58.439871 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
I0120 17:51:58.503321 222240 logs.go:123] Gathering logs for kube-controller-manager [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa] ...
I0120 17:51:58.503389 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
I0120 17:51:58.585533 222240 logs.go:123] Gathering logs for kindnet [03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f] ...
I0120 17:51:58.585565 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
I0120 17:51:58.634511 222240 logs.go:123] Gathering logs for containerd ...
I0120 17:51:58.634535 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 17:51:58.714218 222240 logs.go:123] Gathering logs for kubelet ...
I0120 17:51:58.714256 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 17:51:58.806521 222240 logs.go:123] Gathering logs for etcd [21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5] ...
I0120 17:51:58.806564 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
I0120 17:51:58.858968 222240 logs.go:123] Gathering logs for kube-proxy [a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166] ...
I0120 17:51:58.859000 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
I0120 17:51:58.907920 222240 logs.go:123] Gathering logs for kubernetes-dashboard [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8] ...
I0120 17:51:58.907954 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
I0120 17:51:58.957809 222240 logs.go:123] Gathering logs for kube-apiserver [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2] ...
I0120 17:51:58.957836 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
I0120 17:51:59.014674 222240 logs.go:123] Gathering logs for kube-apiserver [c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6] ...
I0120 17:51:59.014709 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
I0120 17:51:59.066428 222240 logs.go:123] Gathering logs for coredns [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37] ...
I0120 17:51:59.066465 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
I0120 17:51:59.113438 222240 logs.go:123] Gathering logs for kube-proxy [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f] ...
I0120 17:51:59.113467 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
I0120 17:51:59.153989 222240 logs.go:123] Gathering logs for kube-controller-manager [b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a] ...
I0120 17:51:59.154018 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
I0120 17:51:59.221680 222240 logs.go:123] Gathering logs for dmesg ...
I0120 17:51:59.221715 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 17:51:59.244909 222240 logs.go:123] Gathering logs for describe nodes ...
I0120 17:51:59.244938 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
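(Editor's note: the gathering loop above fetches the last 400 lines from each container via `sudo /usr/bin/crictl logs --tail 400 <id>`, plus journalctl for kubelet and containerd. A sketch of the per-container step follows under the same assumptions as before; gatherLogs is a hypothetical helper, and CombinedOutput folding stderr into the captured text is a simplification.)

```go
// Sketch of the gathering step: tail the last 400 log lines of each
// container ID found by the listing step.
package main

import (
	"fmt"
	"os/exec"
)

func gatherLogs(ids []string) map[string]string {
	logs := make(map[string]string)
	for _, id := range ids {
		// CombinedOutput keeps stderr too, since container logs often land there.
		out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			logs[id] = fmt.Sprintf("gather failed: %v", err)
			continue
		}
		logs[id] = string(out)
	}
	return logs
}

func main() {
	for id, text := range gatherLogs([]string{"05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"}) {
		fmt.Printf("=== %s ===\n%s", id, text)
	}
}
```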
I0120 17:52:01.987021 222240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 17:52:02.010001 222240 api_server.go:72] duration metric: took 4m10.828971389s to wait for apiserver process to appear ...
I0120 17:52:02.010030 222240 api_server.go:88] waiting for apiserver healthz status ...
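(Editor's note: the apiserver wait above has two stages: first the process must appear, checked with `sudo pgrep -xnf kube-apiserver.*minikube.*`, then the healthz endpoint must answer. A sketch combining both checks follows; routing the health probe through `kubectl get --raw /healthz` is an assumption for illustration, since minikube probes the apiserver's HTTPS endpoint directly.)

```go
// Sketch of the two-stage apiserver wait: process first, health second.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func apiserverUp() bool {
	// Stage 1: pgrep exits non-zero when no kube-apiserver process matches.
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		return false
	}
	// Stage 2: a healthy apiserver answers /healthz with the literal "ok".
	out, err := exec.Command("kubectl", "get", "--raw", "/healthz").Output()
	return err == nil && strings.TrimSpace(string(out)) == "ok"
}

func main() {
	for !apiserverUp() {
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver process running and healthz ok")
}
```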
I0120 17:52:02.010071 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 17:52:02.010138 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 17:52:02.093839 222240 cri.go:89] found id: "05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
I0120 17:52:02.093863 222240 cri.go:89] found id: "c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
I0120 17:52:02.093868 222240 cri.go:89] found id: ""
I0120 17:52:02.093875 222240 logs.go:282] 2 containers: [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6]
I0120 17:52:02.093931 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.099297 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.103702 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 17:52:02.103787 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 17:52:02.165550 222240 cri.go:89] found id: "39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
I0120 17:52:02.165573 222240 cri.go:89] found id: "21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
I0120 17:52:02.165579 222240 cri.go:89] found id: ""
I0120 17:52:02.165586 222240 logs.go:282] 2 containers: [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5]
I0120 17:52:02.165644 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.172628 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.177430 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 17:52:02.177507 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 17:52:02.250225 222240 cri.go:89] found id: "76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
I0120 17:52:02.250250 222240 cri.go:89] found id: "f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
I0120 17:52:02.250255 222240 cri.go:89] found id: ""
I0120 17:52:02.250262 222240 logs.go:282] 2 containers: [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6]
I0120 17:52:02.250319 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.254841 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.259738 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 17:52:02.259813 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 17:52:02.318546 222240 cri.go:89] found id: "a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
I0120 17:52:02.318566 222240 cri.go:89] found id: "2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
I0120 17:52:02.318572 222240 cri.go:89] found id: ""
I0120 17:52:02.318579 222240 logs.go:282] 2 containers: [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f]
I0120 17:52:02.318634 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.322902 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.327285 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 17:52:02.327378 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 17:52:02.392171 222240 cri.go:89] found id: "b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
I0120 17:52:02.392192 222240 cri.go:89] found id: "a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
I0120 17:52:02.392196 222240 cri.go:89] found id: ""
I0120 17:52:02.392204 222240 logs.go:282] 2 containers: [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166]
I0120 17:52:02.392279 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.396733 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.400973 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 17:52:02.401059 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 17:52:02.467222 222240 cri.go:89] found id: "28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
I0120 17:52:02.467243 222240 cri.go:89] found id: "b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
I0120 17:52:02.467248 222240 cri.go:89] found id: ""
I0120 17:52:02.467255 222240 logs.go:282] 2 containers: [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a]
I0120 17:52:02.467312 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.471371 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.475281 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 17:52:02.475502 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 17:52:02.525378 222240 cri.go:89] found id: "f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
I0120 17:52:02.525398 222240 cri.go:89] found id: "03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
I0120 17:52:02.525404 222240 cri.go:89] found id: ""
I0120 17:52:02.525411 222240 logs.go:282] 2 containers: [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f]
I0120 17:52:02.525466 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.529520 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.534115 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 17:52:02.534191 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 17:52:02.585681 222240 cri.go:89] found id: "d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
I0120 17:52:02.585706 222240 cri.go:89] found id: ""
I0120 17:52:02.585714 222240 logs.go:282] 1 container: [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8]
I0120 17:52:02.585781 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.590016 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 17:52:02.590093 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 17:52:02.640880 222240 cri.go:89] found id: "edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
I0120 17:52:02.640904 222240 cri.go:89] found id: "68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
I0120 17:52:02.640909 222240 cri.go:89] found id: ""
I0120 17:52:02.640916 222240 logs.go:282] 2 containers: [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7]
I0120 17:52:02.640972 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.650887 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.654957 222240 logs.go:123] Gathering logs for kube-apiserver [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2] ...
I0120 17:52:02.654998 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
I0120 17:52:02.739262 222240 logs.go:123] Gathering logs for kube-controller-manager [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa] ...
I0120 17:52:02.739296 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
I0120 17:52:02.811604 222240 logs.go:123] Gathering logs for storage-provisioner [68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7] ...
I0120 17:52:02.811641 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
I0120 17:52:02.858689 222240 logs.go:123] Gathering logs for containerd ...
I0120 17:52:02.858718 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 17:52:02.947820 222240 logs.go:123] Gathering logs for kubelet ...
I0120 17:52:02.947861 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 17:52:01.244283 216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
I0120 17:52:01.736777 216535 pod_ready.go:82] duration metric: took 4m0.007395127s for pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace to be "Ready" ...
E0120 17:52:01.736846 216535 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0120 17:52:01.736870 216535 pod_ready.go:39] duration metric: took 5m28.474374205s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 17:52:01.736899 216535 api_server.go:52] waiting for apiserver process to appear ...
I0120 17:52:01.736964 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 17:52:01.737053 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 17:52:01.781253 216535 cri.go:89] found id: "f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
I0120 17:52:01.781321 216535 cri.go:89] found id: "a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
I0120 17:52:01.781341 216535 cri.go:89] found id: ""
I0120 17:52:01.781356 216535 logs.go:282] 2 containers: [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e]
I0120 17:52:01.781432 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.785393 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.788792 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 17:52:01.788862 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 17:52:01.833834 216535 cri.go:89] found id: "17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
I0120 17:52:01.833869 216535 cri.go:89] found id: "658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
I0120 17:52:01.833902 216535 cri.go:89] found id: ""
I0120 17:52:01.833910 216535 logs.go:282] 2 containers: [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec]
I0120 17:52:01.833990 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.838990 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.843467 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 17:52:01.843556 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 17:52:01.886764 216535 cri.go:89] found id: "583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
I0120 17:52:01.886856 216535 cri.go:89] found id: "c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
I0120 17:52:01.886877 216535 cri.go:89] found id: ""
I0120 17:52:01.886908 216535 logs.go:282] 2 containers: [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc]
I0120 17:52:01.886983 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.891011 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.894775 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 17:52:01.894856 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 17:52:01.949896 216535 cri.go:89] found id: "2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
I0120 17:52:01.949920 216535 cri.go:89] found id: "9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
I0120 17:52:01.949925 216535 cri.go:89] found id: ""
I0120 17:52:01.949933 216535 logs.go:282] 2 containers: [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90]
I0120 17:52:01.949992 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.954296 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:01.958371 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 17:52:01.958506 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 17:52:02.018621 216535 cri.go:89] found id: "dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
I0120 17:52:02.018645 216535 cri.go:89] found id: "6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
I0120 17:52:02.018650 216535 cri.go:89] found id: ""
I0120 17:52:02.018657 216535 logs.go:282] 2 containers: [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42]
I0120 17:52:02.018714 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.023690 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.028696 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 17:52:02.028860 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 17:52:02.096051 216535 cri.go:89] found id: "c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
I0120 17:52:02.096073 216535 cri.go:89] found id: "6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
I0120 17:52:02.096078 216535 cri.go:89] found id: ""
I0120 17:52:02.096085 216535 logs.go:282] 2 containers: [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f]
I0120 17:52:02.096149 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.100993 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.106917 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 17:52:02.106990 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 17:52:02.174049 216535 cri.go:89] found id: "6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
I0120 17:52:02.174080 216535 cri.go:89] found id: "c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
I0120 17:52:02.174086 216535 cri.go:89] found id: ""
I0120 17:52:02.174093 216535 logs.go:282] 2 containers: [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f]
I0120 17:52:02.174145 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.179127 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.184826 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 17:52:02.184901 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 17:52:02.254018 216535 cri.go:89] found id: "9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
I0120 17:52:02.254041 216535 cri.go:89] found id: ""
I0120 17:52:02.254049 216535 logs.go:282] 1 container: [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8]
I0120 17:52:02.254122 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.260217 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 17:52:02.260276 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 17:52:02.316256 216535 cri.go:89] found id: "027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
I0120 17:52:02.316280 216535 cri.go:89] found id: "91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
I0120 17:52:02.316286 216535 cri.go:89] found id: ""
I0120 17:52:02.316293 216535 logs.go:282] 2 containers: [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd]
I0120 17:52:02.316352 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.321766 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:02.327502 216535 logs.go:123] Gathering logs for dmesg ...
I0120 17:52:02.327525 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 17:52:02.343747 216535 logs.go:123] Gathering logs for describe nodes ...
I0120 17:52:02.343778 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 17:52:02.674989 216535 logs.go:123] Gathering logs for kindnet [c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f] ...
I0120 17:52:02.675019 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
I0120 17:52:02.739409 216535 logs.go:123] Gathering logs for kubernetes-dashboard [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8] ...
I0120 17:52:02.739429 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
I0120 17:52:02.805987 216535 logs.go:123] Gathering logs for kube-proxy [6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42] ...
I0120 17:52:02.806072 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
I0120 17:52:02.862091 216535 logs.go:123] Gathering logs for kindnet [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478] ...
I0120 17:52:02.862117 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
I0120 17:52:02.952148 216535 logs.go:123] Gathering logs for storage-provisioner [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2] ...
I0120 17:52:02.952223 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
I0120 17:52:03.020765 216535 logs.go:123] Gathering logs for container status ...
I0120 17:52:03.020815 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 17:52:03.090382 216535 logs.go:123] Gathering logs for kubelet ...
I0120 17:52:03.090580 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0120 17:52:03.161589 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:34 old-k8s-version-145659 kubelet[662]: E0120 17:46:34.880251 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:03.161853 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:35 old-k8s-version-145659 kubelet[662]: E0120 17:46:35.605048 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.165125 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:50 old-k8s-version-145659 kubelet[662]: E0120 17:46:50.413085 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:03.167727 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:03 old-k8s-version-145659 kubelet[662]: E0120 17:47:03.698813 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.167958 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.404037 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.168311 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.706245 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.168784 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.711644 662 pod_workers.go:191] Error syncing pod ceb78d8f-604f-44e7-a643-6a7788c747ae ("storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"
W0120 17:52:03.169139 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.712757 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.170224 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:18 old-k8s-version-145659 kubelet[662]: E0120 17:47:18.760650 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.172926 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:19 old-k8s-version-145659 kubelet[662]: E0120 17:47:19.413053 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:03.173303 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:23 old-k8s-version-145659 kubelet[662]: E0120 17:47:23.877153 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.173514 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:31 old-k8s-version-145659 kubelet[662]: E0120 17:47:31.403908 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.173865 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:36 old-k8s-version-145659 kubelet[662]: E0120 17:47:36.403402 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.174073 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:46 old-k8s-version-145659 kubelet[662]: E0120 17:47:46.412253 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.174688 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:48 old-k8s-version-145659 kubelet[662]: E0120 17:47:48.845203 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.175052 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:53 old-k8s-version-145659 kubelet[662]: E0120 17:47:53.876712 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.175261 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:58 old-k8s-version-145659 kubelet[662]: E0120 17:47:58.411076 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.175632 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:06 old-k8s-version-145659 kubelet[662]: E0120 17:48:06.403375 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.178118 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:12 old-k8s-version-145659 kubelet[662]: E0120 17:48:12.422259 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:03.178583 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:21 old-k8s-version-145659 kubelet[662]: E0120 17:48:21.403254 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.178770 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:25 old-k8s-version-145659 kubelet[662]: E0120 17:48:25.404070 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.179381 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:34 old-k8s-version-145659 kubelet[662]: E0120 17:48:34.988709 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.179564 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:39 old-k8s-version-145659 kubelet[662]: E0120 17:48:39.403769 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.179889 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:43 old-k8s-version-145659 kubelet[662]: E0120 17:48:43.877519 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.180070 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:53 old-k8s-version-145659 kubelet[662]: E0120 17:48:53.403792 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.180396 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:58 old-k8s-version-145659 kubelet[662]: E0120 17:48:58.408685 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.180579 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:06 old-k8s-version-145659 kubelet[662]: E0120 17:49:06.403734 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.180905 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:11 old-k8s-version-145659 kubelet[662]: E0120 17:49:11.403959 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.181086 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:18 old-k8s-version-145659 kubelet[662]: E0120 17:49:18.408125 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.181407 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:24 old-k8s-version-145659 kubelet[662]: E0120 17:49:24.407972 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.181587 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:30 old-k8s-version-145659 kubelet[662]: E0120 17:49:30.404331 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.181909 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:37 old-k8s-version-145659 kubelet[662]: E0120 17:49:37.403265 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.184453 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:45 old-k8s-version-145659 kubelet[662]: E0120 17:49:45.414508 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:03.184816 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:48 old-k8s-version-145659 kubelet[662]: E0120 17:49:48.403936 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.185031 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:00 old-k8s-version-145659 kubelet[662]: E0120 17:50:00.404116 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.185681 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:04 old-k8s-version-145659 kubelet[662]: E0120 17:50:04.268511 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.185896 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:12 old-k8s-version-145659 kubelet[662]: E0120 17:50:12.407685 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.186251 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:13 old-k8s-version-145659 kubelet[662]: E0120 17:50:13.876917 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.186463 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:25 old-k8s-version-145659 kubelet[662]: E0120 17:50:25.403750 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.186830 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:28 old-k8s-version-145659 kubelet[662]: E0120 17:50:28.405640 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.187051 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:40 old-k8s-version-145659 kubelet[662]: E0120 17:50:40.403822 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.187407 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:42 old-k8s-version-145659 kubelet[662]: E0120 17:50:42.404811 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.187689 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:55 old-k8s-version-145659 kubelet[662]: E0120 17:50:55.403674 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.188047 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: E0120 17:50:56.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.188255 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:07 old-k8s-version-145659 kubelet[662]: E0120 17:51:07.403709 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.188613 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: E0120 17:51:10.403931 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.188828 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:18 old-k8s-version-145659 kubelet[662]: E0120 17:51:18.403950 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.189195 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: E0120 17:51:22.404319 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.189403 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:29 old-k8s-version-145659 kubelet[662]: E0120 17:51:29.403725 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.189758 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.189969 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.190324 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:03.190536 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:03.190894 216535 logs.go:138] Found kubelet problem: Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
I0120 17:52:03.190919 216535 logs.go:123] Gathering logs for etcd [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192] ...
I0120 17:52:03.190947 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
I0120 17:52:03.259910 216535 logs.go:123] Gathering logs for kube-scheduler [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040] ...
I0120 17:52:03.259991 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
I0120 17:52:03.317942 216535 logs.go:123] Gathering logs for kube-scheduler [9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90] ...
I0120 17:52:03.318013 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
I0120 17:52:03.380525 216535 logs.go:123] Gathering logs for kube-apiserver [a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e] ...
I0120 17:52:03.380608 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
I0120 17:52:03.453396 216535 logs.go:123] Gathering logs for coredns [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647] ...
I0120 17:52:03.453442 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
I0120 17:52:03.506945 216535 logs.go:123] Gathering logs for coredns [c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc] ...
I0120 17:52:03.506974 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
I0120 17:52:03.555548 216535 logs.go:123] Gathering logs for kube-controller-manager [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c] ...
I0120 17:52:03.555628 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
I0120 17:52:03.674894 216535 logs.go:123] Gathering logs for storage-provisioner [91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd] ...
I0120 17:52:03.674971 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
I0120 17:52:03.746584 216535 logs.go:123] Gathering logs for containerd ...
I0120 17:52:03.746608 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 17:52:03.830076 216535 logs.go:123] Gathering logs for kube-apiserver [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad] ...
I0120 17:52:03.830148 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
I0120 17:52:03.938308 216535 logs.go:123] Gathering logs for etcd [658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec] ...
I0120 17:52:03.938397 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
I0120 17:52:04.023242 216535 logs.go:123] Gathering logs for kube-proxy [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e] ...
I0120 17:52:04.023376 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
I0120 17:52:04.093186 216535 logs.go:123] Gathering logs for kube-controller-manager [6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f] ...
I0120 17:52:04.093218 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
I0120 17:52:04.203549 216535 out.go:358] Setting ErrFile to fd 2...
I0120 17:52:04.203705 216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 17:52:04.203798 216535 out.go:270] X Problems detected in kubelet:
W0120 17:52:04.203843 216535 out.go:270] Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:04.203889 216535 out.go:270] Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:04.203925 216535 out.go:270] Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:04.203955 216535 out.go:270] Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:04.203988 216535 out.go:270] Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
I0120 17:52:04.204019 216535 out.go:358] Setting ErrFile to fd 2...
I0120 17:52:04.204048 216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
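For reference, the "Found kubelet problem" and "X Problems detected in kubelet" lines above come from minikube scanning the kubelet journal for pod sync failures. A minimal approximation of that scan by hand, assuming shell access to the node (a sketch, not minikube's actual matcher; the journalctl invocation itself appears verbatim later in this log):

    # Surface the same CrashLoopBackOff / ImagePullBackOff / ErrImagePull
    # entries reported above from the kubelet systemd journal.
    sudo journalctl -u kubelet -n 400 | grep 'pod_workers.go' | grep 'Error syncing pod'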
I0120 17:52:03.040765 222240 logs.go:123] Gathering logs for describe nodes ...
I0120 17:52:03.040864 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 17:52:03.208364 222240 logs.go:123] Gathering logs for kube-scheduler [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5] ...
I0120 17:52:03.208447 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
I0120 17:52:03.271218 222240 logs.go:123] Gathering logs for kindnet [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0] ...
I0120 17:52:03.271250 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
I0120 17:52:03.330849 222240 logs.go:123] Gathering logs for etcd [21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5] ...
I0120 17:52:03.330882 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
I0120 17:52:03.402129 222240 logs.go:123] Gathering logs for coredns [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37] ...
I0120 17:52:03.402164 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
I0120 17:52:03.452417 222240 logs.go:123] Gathering logs for coredns [f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6] ...
I0120 17:52:03.452448 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
I0120 17:52:03.507073 222240 logs.go:123] Gathering logs for kube-scheduler [2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f] ...
I0120 17:52:03.507096 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
I0120 17:52:03.578981 222240 logs.go:123] Gathering logs for kube-proxy [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f] ...
I0120 17:52:03.579015 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
I0120 17:52:03.641337 222240 logs.go:123] Gathering logs for kindnet [03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f] ...
I0120 17:52:03.641362 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
I0120 17:52:03.712323 222240 logs.go:123] Gathering logs for dmesg ...
I0120 17:52:03.712353 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 17:52:03.728792 222240 logs.go:123] Gathering logs for kube-apiserver [c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6] ...
I0120 17:52:03.728827 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
I0120 17:52:03.822833 222240 logs.go:123] Gathering logs for kubernetes-dashboard [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8] ...
I0120 17:52:03.822869 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
I0120 17:52:03.881083 222240 logs.go:123] Gathering logs for kube-controller-manager [b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a] ...
I0120 17:52:03.881120 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
I0120 17:52:03.996717 222240 logs.go:123] Gathering logs for storage-provisioner [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee] ...
I0120 17:52:03.996798 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
I0120 17:52:04.052407 222240 logs.go:123] Gathering logs for container status ...
I0120 17:52:04.052485 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 17:52:04.117749 222240 logs.go:123] Gathering logs for etcd [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c] ...
I0120 17:52:04.117833 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
I0120 17:52:04.177511 222240 logs.go:123] Gathering logs for kube-proxy [a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166] ...
I0120 17:52:04.177544 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
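Each "Gathering logs for <component> [<id>] ..." pair above is a two-step pattern: resolve a container by name, then tail its logs. Both commands appear verbatim in this log and can be run by hand on the node (kube-apiserver is used here only as an example name):

    # Step 1: list matching container IDs, one full ID per line.
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Step 2: tail the logs of one of the returned IDs.
    sudo /usr/bin/crictl logs --tail 400 <id-from-step-1>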
I0120 17:52:06.729219 222240 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0120 17:52:06.738133 222240 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0120 17:52:06.739310 222240 api_server.go:141] control plane version: v1.32.0
I0120 17:52:06.739337 222240 api_server.go:131] duration metric: took 4.729299032s to wait for apiserver health ...
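The healthz probe above can be issued manually; -k (skip certificate verification) is needed because the apiserver's serving certificate is signed by the cluster CA rather than a publicly trusted one (address and port taken from the log):

    # A healthy apiserver answers HTTP 200 with the body "ok".
    curl -k https://192.168.85.2:8443/healthz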
I0120 17:52:06.739387 222240 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 17:52:06.739413 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 17:52:06.739473 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 17:52:06.780997 222240 cri.go:89] found id: "05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
I0120 17:52:06.781019 222240 cri.go:89] found id: "c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
I0120 17:52:06.781025 222240 cri.go:89] found id: ""
I0120 17:52:06.781032 222240 logs.go:282] 2 containers: [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6]
I0120 17:52:06.781100 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:06.785102 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:06.789052 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 17:52:06.789149 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 17:52:06.828043 222240 cri.go:89] found id: "39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
I0120 17:52:06.828066 222240 cri.go:89] found id: "21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
I0120 17:52:06.828071 222240 cri.go:89] found id: ""
I0120 17:52:06.828079 222240 logs.go:282] 2 containers: [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5]
I0120 17:52:06.828142 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:06.831797 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:06.835573 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 17:52:06.835722 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 17:52:06.876754 222240 cri.go:89] found id: "76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
I0120 17:52:06.876778 222240 cri.go:89] found id: "f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
I0120 17:52:06.876783 222240 cri.go:89] found id: ""
I0120 17:52:06.876790 222240 logs.go:282] 2 containers: [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6]
I0120 17:52:06.876846 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:06.880582 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:06.884412 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 17:52:06.884525 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 17:52:06.928663 222240 cri.go:89] found id: "a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
I0120 17:52:06.928728 222240 cri.go:89] found id: "2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
I0120 17:52:06.928746 222240 cri.go:89] found id: ""
I0120 17:52:06.928768 222240 logs.go:282] 2 containers: [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f]
I0120 17:52:06.928854 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:06.932910 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:06.937039 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 17:52:06.937164 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 17:52:06.985011 222240 cri.go:89] found id: "b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
I0120 17:52:06.985083 222240 cri.go:89] found id: "a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
I0120 17:52:06.985101 222240 cri.go:89] found id: ""
I0120 17:52:06.985123 222240 logs.go:282] 2 containers: [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166]
I0120 17:52:06.985208 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:06.988821 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:06.992483 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 17:52:06.992560 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 17:52:07.035044 222240 cri.go:89] found id: "28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
I0120 17:52:07.035115 222240 cri.go:89] found id: "b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
I0120 17:52:07.035146 222240 cri.go:89] found id: ""
I0120 17:52:07.035170 222240 logs.go:282] 2 containers: [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a]
I0120 17:52:07.035259 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:07.039075 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:07.042498 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 17:52:07.042570 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 17:52:07.079877 222240 cri.go:89] found id: "f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
I0120 17:52:07.079951 222240 cri.go:89] found id: "03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
I0120 17:52:07.079970 222240 cri.go:89] found id: ""
I0120 17:52:07.079984 222240 logs.go:282] 2 containers: [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f]
I0120 17:52:07.080056 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:07.086332 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:07.092807 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 17:52:07.092925 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 17:52:07.138181 222240 cri.go:89] found id: "edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
I0120 17:52:07.138204 222240 cri.go:89] found id: "68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
I0120 17:52:07.138209 222240 cri.go:89] found id: ""
I0120 17:52:07.138216 222240 logs.go:282] 2 containers: [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7]
I0120 17:52:07.138278 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:07.142180 222240 ssh_runner.go:195] Run: which crictl
I0120 17:52:07.145487 222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 17:52:07.145581 222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 17:52:07.181159 222240 cri.go:89] found id: "d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
I0120 17:52:07.181182 222240 cri.go:89] found id: ""
I0120 17:52:07.181189 222240 logs.go:282] 1 containers: [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8]
I0120 17:52:07.181262 222240 ssh_runner.go:195] Run: which crictl
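The "N containers:" lines above are simply the number of IDs crictl returns; two IDs per component show up because crictl ps -a includes exited containers (here, presumably the instances from before the container restart) alongside running ones. Reproducing the etcd listing from this log:

    sudo crictl ps -a --quiet --name=etcd
    # 39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c
    # 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5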
I0120 17:52:07.185182 222240 logs.go:123] Gathering logs for etcd [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c] ...
I0120 17:52:07.185209 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
I0120 17:52:07.228879 222240 logs.go:123] Gathering logs for coredns [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37] ...
I0120 17:52:07.228910 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
I0120 17:52:07.275233 222240 logs.go:123] Gathering logs for coredns [f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6] ...
I0120 17:52:07.275277 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
I0120 17:52:07.317238 222240 logs.go:123] Gathering logs for kube-proxy [a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166] ...
I0120 17:52:07.317274 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
I0120 17:52:07.356737 222240 logs.go:123] Gathering logs for kube-controller-manager [b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a] ...
I0120 17:52:07.356763 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
I0120 17:52:07.432622 222240 logs.go:123] Gathering logs for kindnet [03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f] ...
I0120 17:52:07.432657 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
I0120 17:52:07.485007 222240 logs.go:123] Gathering logs for storage-provisioner [68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7] ...
I0120 17:52:07.485035 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
I0120 17:52:07.530703 222240 logs.go:123] Gathering logs for kube-apiserver [c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6] ...
I0120 17:52:07.530738 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
I0120 17:52:07.604556 222240 logs.go:123] Gathering logs for containerd ...
I0120 17:52:07.604592 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 17:52:07.676302 222240 logs.go:123] Gathering logs for kubernetes-dashboard [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8] ...
I0120 17:52:07.676345 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
I0120 17:52:07.724747 222240 logs.go:123] Gathering logs for kube-scheduler [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5] ...
I0120 17:52:07.724775 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
I0120 17:52:07.764536 222240 logs.go:123] Gathering logs for kube-scheduler [2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f] ...
I0120 17:52:07.764564 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
I0120 17:52:07.821815 222240 logs.go:123] Gathering logs for kube-controller-manager [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa] ...
I0120 17:52:07.821850 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
I0120 17:52:07.903863 222240 logs.go:123] Gathering logs for kindnet [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0] ...
I0120 17:52:07.903898 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
I0120 17:52:07.953613 222240 logs.go:123] Gathering logs for etcd [21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5] ...
I0120 17:52:07.953642 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
I0120 17:52:08.011222 222240 logs.go:123] Gathering logs for kube-apiserver [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2] ...
I0120 17:52:08.011260 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
I0120 17:52:08.081563 222240 logs.go:123] Gathering logs for kube-proxy [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f] ...
I0120 17:52:08.081596 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
I0120 17:52:08.126297 222240 logs.go:123] Gathering logs for storage-provisioner [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee] ...
I0120 17:52:08.126336 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
I0120 17:52:08.168888 222240 logs.go:123] Gathering logs for dmesg ...
I0120 17:52:08.168917 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 17:52:08.192653 222240 logs.go:123] Gathering logs for describe nodes ...
I0120 17:52:08.192684 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 17:52:08.344570 222240 logs.go:123] Gathering logs for container status ...
I0120 17:52:08.344601 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 17:52:08.390944 222240 logs.go:123] Gathering logs for kubelet ...
I0120 17:52:08.390973 222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 17:52:10.983596 222240 system_pods.go:59] 9 kube-system pods found
I0120 17:52:10.983687 222240 system_pods.go:61] "coredns-668d6bf9bc-hpgxx" [aa92c9f5-e893-4de3-96b0-ca01664fffdb] Running
I0120 17:52:10.983709 222240 system_pods.go:61] "etcd-embed-certs-698725" [04251eb0-9233-4252-a36f-cb9982b6cf58] Running
I0120 17:52:10.983725 222240 system_pods.go:61] "kindnet-7bpzp" [5f1ef73a-3e79-4e00-ab0d-3fa04bafcf4d] Running
I0120 17:52:10.983743 222240 system_pods.go:61] "kube-apiserver-embed-certs-698725" [1eff48d5-cec4-493a-9408-49a0db22ad25] Running
I0120 17:52:10.983749 222240 system_pods.go:61] "kube-controller-manager-embed-certs-698725" [0d662fa3-2c7c-4a82-9de7-1a220a569b38] Running
I0120 17:52:10.983762 222240 system_pods.go:61] "kube-proxy-cxzfl" [b77e79d8-c097-401e-a08c-b1338305f9eb] Running
I0120 17:52:10.983777 222240 system_pods.go:61] "kube-scheduler-embed-certs-698725" [3639200c-a355-409a-9dbb-6298c975ff23] Running
I0120 17:52:10.983786 222240 system_pods.go:61] "metrics-server-f79f97bbb-44zkt" [5d7d7a02-93d2-460c-9bf9-0716128b06d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 17:52:10.983800 222240 system_pods.go:61] "storage-provisioner" [94c17bca-4ede-44c1-a68c-c742181c749f] Running
I0120 17:52:10.983808 222240 system_pods.go:74] duration metric: took 4.244414285s to wait for pod list to return data ...
I0120 17:52:10.983819 222240 default_sa.go:34] waiting for default service account to be created ...
I0120 17:52:10.987628 222240 default_sa.go:45] found service account: "default"
I0120 17:52:10.987655 222240 default_sa.go:55] duration metric: took 3.830033ms for default service account to be created ...
I0120 17:52:10.987664 222240 system_pods.go:137] waiting for k8s-apps to be running ...
I0120 17:52:10.993748 222240 system_pods.go:87] 9 kube-system pods found
I0120 17:52:10.997144 222240 system_pods.go:105] "coredns-668d6bf9bc-hpgxx" [aa92c9f5-e893-4de3-96b0-ca01664fffdb] Running
I0120 17:52:10.997167 222240 system_pods.go:105] "etcd-embed-certs-698725" [04251eb0-9233-4252-a36f-cb9982b6cf58] Running
I0120 17:52:10.997173 222240 system_pods.go:105] "kindnet-7bpzp" [5f1ef73a-3e79-4e00-ab0d-3fa04bafcf4d] Running
I0120 17:52:10.997179 222240 system_pods.go:105] "kube-apiserver-embed-certs-698725" [1eff48d5-cec4-493a-9408-49a0db22ad25] Running
I0120 17:52:10.997184 222240 system_pods.go:105] "kube-controller-manager-embed-certs-698725" [0d662fa3-2c7c-4a82-9de7-1a220a569b38] Running
I0120 17:52:10.997190 222240 system_pods.go:105] "kube-proxy-cxzfl" [b77e79d8-c097-401e-a08c-b1338305f9eb] Running
I0120 17:52:10.997195 222240 system_pods.go:105] "kube-scheduler-embed-certs-698725" [3639200c-a355-409a-9dbb-6298c975ff23] Running
I0120 17:52:10.997204 222240 system_pods.go:105] "metrics-server-f79f97bbb-44zkt" [5d7d7a02-93d2-460c-9bf9-0716128b06d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 17:52:10.997210 222240 system_pods.go:105] "storage-provisioner" [94c17bca-4ede-44c1-a68c-c742181c749f] Running
I0120 17:52:10.997219 222240 system_pods.go:147] duration metric: took 9.549462ms to wait for k8s-apps to be running ...
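Every pod except metrics-server reports Running; metrics-server stays Pending with ContainersNotReady. To inspect the reason by hand (pod name and namespace taken from the listing above, assuming a working kubeconfig):

    kubectl -n kube-system describe pod metrics-server-f79f97bbb-44zkt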
I0120 17:52:10.997229 222240 system_svc.go:44] waiting for kubelet service to be running ....
I0120 17:52:10.997288 222240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0120 17:52:11.012272 222240 system_svc.go:56] duration metric: took 15.033469ms WaitForService to wait for kubelet
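The kubelet service wait above reduces to a systemd activity check. The standard form by hand (exit status 0 means the unit is active):

    sudo systemctl is-active --quiet kubelet && echo kubelet is active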
I0120 17:52:11.012301 222240 kubeadm.go:582] duration metric: took 4m19.831275097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 17:52:11.012321 222240 node_conditions.go:102] verifying NodePressure condition ...
I0120 17:52:11.015569 222240 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0120 17:52:11.015603 222240 node_conditions.go:123] node cpu capacity is 2
I0120 17:52:11.015617 222240 node_conditions.go:105] duration metric: took 3.290494ms to run NodePressure ...
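The NodePressure verification reads the capacity figures logged above (203034800Ki ephemeral storage, 2 CPUs) from the node object. The same values via kubectl, assuming the single node carries the profile name embed-certs-698725:

    kubectl get node embed-certs-698725 \
      -o jsonpath="{.status.capacity.cpu} {.status.capacity['ephemeral-storage']}"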
I0120 17:52:11.015630 222240 start.go:241] waiting for startup goroutines ...
I0120 17:52:11.015638 222240 start.go:246] waiting for cluster config update ...
I0120 17:52:11.015649 222240 start.go:255] writing updated cluster config ...
I0120 17:52:11.015968 222240 ssh_runner.go:195] Run: rm -f paused
I0120 17:52:11.079702 222240 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
I0120 17:52:11.084877 222240 out.go:177] * Done! kubectl is now configured to use "embed-certs-698725" cluster and "default" namespace by default
I0120 17:52:14.204540 216535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 17:52:14.216885 216535 api_server.go:72] duration metric: took 5m59.990640844s to wait for apiserver process to appear ...
I0120 17:52:14.216913 216535 api_server.go:88] waiting for apiserver healthz status ...
I0120 17:52:14.216952 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 17:52:14.217012 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 17:52:14.275816 216535 cri.go:89] found id: "f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
I0120 17:52:14.275838 216535 cri.go:89] found id: "a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
I0120 17:52:14.275843 216535 cri.go:89] found id: ""
I0120 17:52:14.275850 216535 logs.go:282] 2 containers: [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e]
I0120 17:52:14.275981 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.280911 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.284620 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 17:52:14.284694 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 17:52:14.324506 216535 cri.go:89] found id: "17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
I0120 17:52:14.324530 216535 cri.go:89] found id: "658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
I0120 17:52:14.324536 216535 cri.go:89] found id: ""
I0120 17:52:14.324544 216535 logs.go:282] 2 containers: [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec]
I0120 17:52:14.324602 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.328307 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.331742 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 17:52:14.331812 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 17:52:14.375892 216535 cri.go:89] found id: "583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
I0120 17:52:14.375913 216535 cri.go:89] found id: "c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
I0120 17:52:14.375919 216535 cri.go:89] found id: ""
I0120 17:52:14.375926 216535 logs.go:282] 2 containers: [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc]
I0120 17:52:14.376011 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.379798 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.383248 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 17:52:14.383317 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 17:52:14.431319 216535 cri.go:89] found id: "2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
I0120 17:52:14.431376 216535 cri.go:89] found id: "9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
I0120 17:52:14.431382 216535 cri.go:89] found id: ""
I0120 17:52:14.431388 216535 logs.go:282] 2 containers: [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90]
I0120 17:52:14.431444 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.435015 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.438536 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 17:52:14.438604 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 17:52:14.483659 216535 cri.go:89] found id: "dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
I0120 17:52:14.483691 216535 cri.go:89] found id: "6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
I0120 17:52:14.483697 216535 cri.go:89] found id: ""
I0120 17:52:14.483703 216535 logs.go:282] 2 containers: [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42]
I0120 17:52:14.483778 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.487550 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.491261 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 17:52:14.491399 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 17:52:14.537554 216535 cri.go:89] found id: "c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
I0120 17:52:14.537574 216535 cri.go:89] found id: "6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
I0120 17:52:14.537580 216535 cri.go:89] found id: ""
I0120 17:52:14.537587 216535 logs.go:282] 2 containers: [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f]
I0120 17:52:14.537645 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.541369 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.544958 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 17:52:14.545047 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 17:52:14.582569 216535 cri.go:89] found id: "6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
I0120 17:52:14.582592 216535 cri.go:89] found id: "c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
I0120 17:52:14.582598 216535 cri.go:89] found id: ""
I0120 17:52:14.582605 216535 logs.go:282] 2 containers: [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f]
I0120 17:52:14.582683 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.586500 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.590053 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 17:52:14.590126 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 17:52:14.663263 216535 cri.go:89] found id: "027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
I0120 17:52:14.663283 216535 cri.go:89] found id: "91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
I0120 17:52:14.663289 216535 cri.go:89] found id: ""
I0120 17:52:14.663296 216535 logs.go:282] 2 containers: [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd]
I0120 17:52:14.663372 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.666867 216535 ssh_runner.go:195] Run: which crictl
I0120 17:52:14.672075 216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 17:52:14.672174 216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 17:52:14.720019 216535 cri.go:89] found id: "9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
I0120 17:52:14.720042 216535 cri.go:89] found id: ""
I0120 17:52:14.720054 216535 logs.go:282] 1 container: [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8]
I0120 17:52:14.720116 216535 ssh_runner.go:195] Run: which crictl
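The block above is minikube shelling out to crictl once per control-plane component: list the matching container IDs, then tail each container's logs. A minimal stand-alone sketch of the same discovery pass, reusing the exact flags shown in the log (the component list mirrors the queries above; the loop structure itself is illustrative):

  #!/usr/bin/env bash
  # Sketch of the discovery pass logged above: list container IDs per
  # component with crictl, then tail the last 400 log lines of each.
  components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
              kube-controller-manager kindnet storage-provisioner kubernetes-dashboard)
  for name in "${components[@]}"; do
    # --quiet prints only container IDs; -a includes exited containers too
    for id in $(sudo crictl ps -a --quiet --name="$name"); do
      echo "=== ${name} [${id}] ==="
      sudo crictl logs --tail 400 "$id"
    done
  done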
I0120 17:52:14.723774 216535 logs.go:123] Gathering logs for kindnet [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478] ...
I0120 17:52:14.723800 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
I0120 17:52:14.773380 216535 logs.go:123] Gathering logs for storage-provisioner [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2] ...
I0120 17:52:14.773417 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
I0120 17:52:14.816814 216535 logs.go:123] Gathering logs for kubelet ...
I0120 17:52:14.816842 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0120 17:52:14.876608 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:34 old-k8s-version-145659 kubelet[662]: E0120 17:46:34.880251 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:14.876839 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:35 old-k8s-version-145659 kubelet[662]: E0120 17:46:35.605048 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.879700 216535 logs.go:138] Found kubelet problem: Jan 20 17:46:50 old-k8s-version-145659 kubelet[662]: E0120 17:46:50.413085 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:14.883739 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:03 old-k8s-version-145659 kubelet[662]: E0120 17:47:03.698813 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.883950 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.404037 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.884282 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.706245 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.884720 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.711644 662 pod_workers.go:191] Error syncing pod ceb78d8f-604f-44e7-a643-6a7788c747ae ("storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"
W0120 17:52:14.885047 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.712757 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.886100 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:18 old-k8s-version-145659 kubelet[662]: E0120 17:47:18.760650 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.888645 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:19 old-k8s-version-145659 kubelet[662]: E0120 17:47:19.413053 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:14.889002 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:23 old-k8s-version-145659 kubelet[662]: E0120 17:47:23.877153 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.889194 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:31 old-k8s-version-145659 kubelet[662]: E0120 17:47:31.403908 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.889559 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:36 old-k8s-version-145659 kubelet[662]: E0120 17:47:36.403402 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.889746 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:46 old-k8s-version-145659 kubelet[662]: E0120 17:47:46.412253 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.890333 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:48 old-k8s-version-145659 kubelet[662]: E0120 17:47:48.845203 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.890660 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:53 old-k8s-version-145659 kubelet[662]: E0120 17:47:53.876712 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.890848 216535 logs.go:138] Found kubelet problem: Jan 20 17:47:58 old-k8s-version-145659 kubelet[662]: E0120 17:47:58.411076 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.891179 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:06 old-k8s-version-145659 kubelet[662]: E0120 17:48:06.403375 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.893674 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:12 old-k8s-version-145659 kubelet[662]: E0120 17:48:12.422259 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:14.894035 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:21 old-k8s-version-145659 kubelet[662]: E0120 17:48:21.403254 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.894400 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:25 old-k8s-version-145659 kubelet[662]: E0120 17:48:25.404070 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.895006 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:34 old-k8s-version-145659 kubelet[662]: E0120 17:48:34.988709 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.895192 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:39 old-k8s-version-145659 kubelet[662]: E0120 17:48:39.403769 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.895564 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:43 old-k8s-version-145659 kubelet[662]: E0120 17:48:43.877519 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.895751 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:53 old-k8s-version-145659 kubelet[662]: E0120 17:48:53.403792 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.896077 216535 logs.go:138] Found kubelet problem: Jan 20 17:48:58 old-k8s-version-145659 kubelet[662]: E0120 17:48:58.408685 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.896260 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:06 old-k8s-version-145659 kubelet[662]: E0120 17:49:06.403734 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.896584 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:11 old-k8s-version-145659 kubelet[662]: E0120 17:49:11.403959 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.896768 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:18 old-k8s-version-145659 kubelet[662]: E0120 17:49:18.408125 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.897094 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:24 old-k8s-version-145659 kubelet[662]: E0120 17:49:24.407972 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.897306 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:30 old-k8s-version-145659 kubelet[662]: E0120 17:49:30.404331 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.897633 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:37 old-k8s-version-145659 kubelet[662]: E0120 17:49:37.403265 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.900069 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:45 old-k8s-version-145659 kubelet[662]: E0120 17:49:45.414508 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0120 17:52:14.900399 216535 logs.go:138] Found kubelet problem: Jan 20 17:49:48 old-k8s-version-145659 kubelet[662]: E0120 17:49:48.403936 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.900588 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:00 old-k8s-version-145659 kubelet[662]: E0120 17:50:00.404116 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.901175 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:04 old-k8s-version-145659 kubelet[662]: E0120 17:50:04.268511 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.901358 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:12 old-k8s-version-145659 kubelet[662]: E0120 17:50:12.407685 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.901683 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:13 old-k8s-version-145659 kubelet[662]: E0120 17:50:13.876917 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.901866 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:25 old-k8s-version-145659 kubelet[662]: E0120 17:50:25.403750 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.902191 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:28 old-k8s-version-145659 kubelet[662]: E0120 17:50:28.405640 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.902379 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:40 old-k8s-version-145659 kubelet[662]: E0120 17:50:40.403822 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.902706 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:42 old-k8s-version-145659 kubelet[662]: E0120 17:50:42.404811 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.902892 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:55 old-k8s-version-145659 kubelet[662]: E0120 17:50:55.403674 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.903219 216535 logs.go:138] Found kubelet problem: Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: E0120 17:50:56.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.903413 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:07 old-k8s-version-145659 kubelet[662]: E0120 17:51:07.403709 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.903739 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: E0120 17:51:10.403931 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.903923 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:18 old-k8s-version-145659 kubelet[662]: E0120 17:51:18.403950 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.904249 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: E0120 17:51:22.404319 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.904433 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:29 old-k8s-version-145659 kubelet[662]: E0120 17:51:29.403725 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.904758 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.904944 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.905272 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.905457 216535 logs.go:138] Found kubelet problem: Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.905785 216535 logs.go:138] Found kubelet problem: Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:14.905970 216535 logs.go:138] Found kubelet problem: Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:14.906299 216535 logs.go:138] Found kubelet problem: Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
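Every ErrImagePull/ImagePullBackOff entry above traces back to the same unresolvable image reference, fake.domain/registry.k8s.io/echoserver:1.4, which the metrics-server pod is configured to pull. One hedged way to confirm the offending image from the deployment spec (kubectl binary and kubeconfig paths are the ones this log already uses; the jsonpath query is illustrative):

  # Print the image the metrics-server deployment is trying to pull.
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get deployment metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'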
I0120 17:52:14.906310 216535 logs.go:123] Gathering logs for kube-apiserver [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad] ...
I0120 17:52:14.906325 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
I0120 17:52:14.972580 216535 logs.go:123] Gathering logs for coredns [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647] ...
I0120 17:52:14.972618 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
I0120 17:52:15.024121 216535 logs.go:123] Gathering logs for containerd ...
I0120 17:52:15.024165 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 17:52:15.100734 216535 logs.go:123] Gathering logs for describe nodes ...
I0120 17:52:15.100774 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 17:52:15.284993 216535 logs.go:123] Gathering logs for coredns [c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc] ...
I0120 17:52:15.285026 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
I0120 17:52:15.335235 216535 logs.go:123] Gathering logs for kindnet [c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f] ...
I0120 17:52:15.335264 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
I0120 17:52:15.374772 216535 logs.go:123] Gathering logs for storage-provisioner [91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd] ...
I0120 17:52:15.374806 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
I0120 17:52:15.433634 216535 logs.go:123] Gathering logs for container status ...
I0120 17:52:15.433663 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 17:52:15.488059 216535 logs.go:123] Gathering logs for etcd [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192] ...
I0120 17:52:15.488091 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
I0120 17:52:15.542254 216535 logs.go:123] Gathering logs for kube-scheduler [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040] ...
I0120 17:52:15.542284 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
I0120 17:52:15.582486 216535 logs.go:123] Gathering logs for kube-controller-manager [6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f] ...
I0120 17:52:15.582513 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
I0120 17:52:15.660944 216535 logs.go:123] Gathering logs for kube-scheduler [9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90] ...
I0120 17:52:15.661023 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
I0120 17:52:15.709672 216535 logs.go:123] Gathering logs for kube-proxy [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e] ...
I0120 17:52:15.709763 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
I0120 17:52:15.755613 216535 logs.go:123] Gathering logs for kube-proxy [6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42] ...
I0120 17:52:15.755647 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
I0120 17:52:15.794100 216535 logs.go:123] Gathering logs for kube-controller-manager [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c] ...
I0120 17:52:15.794126 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
I0120 17:52:15.876898 216535 logs.go:123] Gathering logs for kubernetes-dashboard [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8] ...
I0120 17:52:15.876935 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
I0120 17:52:15.937814 216535 logs.go:123] Gathering logs for dmesg ...
I0120 17:52:15.937842 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 17:52:15.955450 216535 logs.go:123] Gathering logs for kube-apiserver [a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e] ...
I0120 17:52:15.955481 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
I0120 17:52:16.047655 216535 logs.go:123] Gathering logs for etcd [658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec] ...
I0120 17:52:16.047691 216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
I0120 17:52:16.094113 216535 out.go:358] Setting ErrFile to fd 2...
I0120 17:52:16.094145 216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 17:52:16.094250 216535 out.go:270] X Problems detected in kubelet:
W0120 17:52:16.094269 216535 out.go:270] Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:16.094283 216535 out.go:270] Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:16.094294 216535 out.go:270] Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
W0120 17:52:16.094301 216535 out.go:270] Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 17:52:16.094307 216535 out.go:270] Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
I0120 17:52:16.094313 216535 out.go:358] Setting ErrFile to fd 2...
I0120 17:52:16.094320 216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:52:26.095908 216535 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0120 17:52:26.165226 216535 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
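The healthz probe above is a plain HTTPS GET against the apiserver, so it can be reproduced by hand. A sketch (endpoint taken from the log; -k skips TLS verification, so substitute the cluster CA if you need an authenticated check):

  # Expect HTTP 200 with the literal body "ok", matching the log above.
  curl -k https://192.168.76.2:8443/healthz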
I0120 17:52:26.168436 216535 out.go:201]
W0120 17:52:26.171235 216535 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0120 17:52:26.171279 216535 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0120 17:52:26.171300 216535 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0120 17:52:26.171306 216535 out.go:270] *
W0120 17:52:26.172503 216535 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0120 17:52:26.175703 216535 out.go:201]
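The failure message above already names the recovery path; collected here as a runnable sketch (both commands are quoted verbatim from the log output):

  # Capture logs for the GitHub issue, then wipe all profiles and cached state.
  minikube logs --file=logs.txt
  minikube delete --all --purge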
==> container status <==
CONTAINER        IMAGE            CREATED          STATE     NAME                         ATTEMPT   POD ID           POD
7188f8c06a3b3    523cad1a4df73    2 minutes ago    Exited    dashboard-metrics-scraper    5         c8fc9fd032271    dashboard-metrics-scraper-8d5bb5db8-cl8l4
027296a495300    ba04bb24b9575    5 minutes ago    Running   storage-provisioner          3         c99266b0f867a    storage-provisioner
9d777334c1d3a    20b332c9a70d8    5 minutes ago    Running   kubernetes-dashboard         0         1bc38e3d090e3    kubernetes-dashboard-cd95d586-httgs
442a35203c65d    1611cd07b61d5    5 minutes ago    Running   busybox                      1         76333aa0b4be7    busybox
91b967b2a1923    ba04bb24b9575    5 minutes ago    Exited    storage-provisioner          2         c99266b0f867a    storage-provisioner
6471c303e0b43    2be0bcf609c65    5 minutes ago    Running   kindnet-cni                  1         547911b4053be    kindnet-lqrj9
583937fe82126    db91994f4ee8f    5 minutes ago    Running   coredns                      1         78fc3bab14b29    coredns-74ff55c5b-gtjp2
dcaa2ffccfffd    25a5233254979    5 minutes ago    Running   kube-proxy                   1         e8b9756b5dba1    kube-proxy-mxqgj
2c63e2dabdc91    e7605f88f17d6    6 minutes ago    Running   kube-scheduler               1         6eb729587ce65    kube-scheduler-old-k8s-version-145659
17f42bfa9e9d5    05b738aa1bc63    6 minutes ago    Running   etcd                         1         2c11c7fde733f    etcd-old-k8s-version-145659
c5b412b8a50ed    1df8a2b116bd1    6 minutes ago    Running   kube-controller-manager      1         1d45273f8403b    kube-controller-manager-old-k8s-version-145659
f8793ba82cf05    2c08bbbc02d3a    6 minutes ago    Running   kube-apiserver               1         44abba68b72db    kube-apiserver-old-k8s-version-145659
f8e366ebf8ddf    1611cd07b61d5    6 minutes ago    Exited    busybox                      0         3ad30ae7d131f    busybox
c290ff766fe32    db91994f4ee8f    8 minutes ago    Exited    coredns                      0         72f070c27b32a    coredns-74ff55c5b-gtjp2
c56a4a523a5ef    2be0bcf609c65    8 minutes ago    Exited    kindnet-cni                  0         ba9b4fddad7e6    kindnet-lqrj9
6be91b040ecf6    25a5233254979    8 minutes ago    Exited    kube-proxy                   0         2d7942fce4b20    kube-proxy-mxqgj
9e7debe8caa85    e7605f88f17d6    8 minutes ago    Exited    kube-scheduler               0         4d603e4e91de4    kube-scheduler-old-k8s-version-145659
a3c342d8958e0    2c08bbbc02d3a    8 minutes ago    Exited    kube-apiserver               0         6ee593deec223    kube-apiserver-old-k8s-version-145659
6be5e3acc8c3c    1df8a2b116bd1    8 minutes ago    Exited    kube-controller-manager      0         a194d95ea6f50    kube-controller-manager-old-k8s-version-145659
658c4e5a0b63e    05b738aa1bc63    8 minutes ago    Exited    etcd                         0         3b7de13a4f79d    etcd-old-k8s-version-145659
==> containerd <==
Jan 20 17:48:12 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:12.421777380Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.414102638Z" level=info msg="CreateContainer within sandbox \"c8fc9fd0322711bf7409db7fc055c6d3a6c4056ca87d9ffcc5a0da3be450a7f8\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.435965181Z" level=info msg="CreateContainer within sandbox \"c8fc9fd0322711bf7409db7fc055c6d3a6c4056ca87d9ffcc5a0da3be450a7f8\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\""
Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.436861349Z" level=info msg="StartContainer for \"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\""
Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.511720980Z" level=info msg="StartContainer for \"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\" returns successfully"
Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.511883245Z" level=info msg="received exit event container_id:\"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\" id:\"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\" pid:3071 exit_status:255 exited_at:{seconds:1737395314 nanos:508816228}"
Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.544159399Z" level=info msg="shim disconnected" id=4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37 namespace=k8s.io
Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.544222128Z" level=warning msg="cleaning up after shim disconnected" id=4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37 namespace=k8s.io
Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.544231564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.990379726Z" level=info msg="RemoveContainer for \"3520634dcc87f6efc329f938ca9ec9a853f8815395f1f46fa9549f1b259dee86\""
Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.998061739Z" level=info msg="RemoveContainer for \"3520634dcc87f6efc329f938ca9ec9a853f8815395f1f46fa9549f1b259dee86\" returns successfully"
Jan 20 17:49:45 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:49:45.405350188Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 17:49:45 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:49:45.411210393Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Jan 20 17:49:45 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:49:45.413351285Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Jan 20 17:49:45 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:49:45.413389324Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.405411386Z" level=info msg="CreateContainer within sandbox \"c8fc9fd0322711bf7409db7fc055c6d3a6c4056ca87d9ffcc5a0da3be450a7f8\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.424497647Z" level=info msg="CreateContainer within sandbox \"c8fc9fd0322711bf7409db7fc055c6d3a6c4056ca87d9ffcc5a0da3be450a7f8\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab\""
Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.425311739Z" level=info msg="StartContainer for \"7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab\""
Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.507934237Z" level=info msg="StartContainer for \"7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab\" returns successfully"
Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.509764843Z" level=info msg="received exit event container_id:\"7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab\" id:\"7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab\" pid:3309 exit_status:255 exited_at:{seconds:1737395403 nanos:509483578}"
Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.532607815Z" level=info msg="shim disconnected" id=7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab namespace=k8s.io
Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.532668944Z" level=warning msg="cleaning up after shim disconnected" id=7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab namespace=k8s.io
Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.532679372Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 17:50:04 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:04.270019870Z" level=info msg="RemoveContainer for \"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\""
Jan 20 17:50:04 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:04.276782651Z" level=info msg="RemoveContainer for \"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\" returns successfully"
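The containerd entries above fail at the DNS step, not at the registry itself. A quick reproduction of the exact lookup containerd attempted (domain and resolver are taken verbatim from the error message):

  # Should report NXDOMAIN, matching
  # "lookup fake.domain on 192.168.76.1:53: no such host" above.
  nslookup fake.domain 192.168.76.1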
==> coredns [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:54633 - 34674 "HINFO IN 1620636510534185632.5118592647921316763. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026162743s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0120 17:47:05.190043 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 17:46:35.18949319 +0000 UTC m=+0.083059828) (total time: 30.000451714s):
Trace[2019727887]: [30.000451714s] [30.000451714s] END
E0120 17:47:05.190075 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0120 17:47:05.201106 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 17:46:35.200720128 +0000 UTC m=+0.094286758) (total time: 30.000358028s):
Trace[939984059]: [30.000358028s] [30.000358028s] END
E0120 17:47:05.201130 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0120 17:47:05.201207 1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 17:46:35.201021906 +0000 UTC m=+0.094588536) (total time: 30.000175176s):
Trace[1474941318]: [30.000175176s] [30.000175176s] END
E0120 17:47:05.201217 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
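The three reflector failures above share one cause: for the first 30 seconds after the restart, this coredns instance could not reach 10.96.0.1:443, the in-cluster ClusterIP of the apiserver, so each initial List (Namespaces, Services, Endpoints) issued by its kubernetes plugin timed out. A minimal Go sketch of the same call path, using the in-cluster setup coredns uses and the client-go generation named in the traces (illustrative only, not part of the test):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config resolves the apiserver from the service environment,
	// i.e. the 10.96.0.1:443 ClusterIP that times out in the errors above.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// The same List the reflector issues; a dial failure here surfaces as
	// "dial tcp 10.96.0.1:443: i/o timeout" exactly as logged by coredns.
	ns, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("namespaces:", len(ns.Items))
}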
==> coredns [c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:34633 - 7735 "HINFO IN 8473092561579625720.1642975792137428739. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.069892168s
==> describe nodes <==
Name: old-k8s-version-145659
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-145659
kubernetes.io/os=linux
minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc
minikube.k8s.io/name=old-k8s-version-145659
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_20T17_43_49_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 20 Jan 2025 17:43:45 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-145659
AcquireTime: <unset>
RenewTime: Mon, 20 Jan 2025 17:52:26 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 20 Jan 2025 17:52:26 +0000   Mon, 20 Jan 2025 17:43:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 20 Jan 2025 17:52:26 +0000   Mon, 20 Jan 2025 17:43:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 20 Jan 2025 17:52:26 +0000   Mon, 20 Jan 2025 17:43:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 20 Jan 2025 17:52:26 +0000   Mon, 20 Jan 2025 17:44:04 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-145659
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022308Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022308Ki
pods: 110
System Info:
Machine ID: 2288b08bfb774fab9c2db8bb9a3f2e51
System UUID: fd62fb04-58fa-4af2-9e2d-f153fa752255
Boot ID: 39eacc08-2a64-468f-9148-fca198b76ea1
Kernel Version: 5.15.0-1075-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.24
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods:  (12 in total)
  Namespace             Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------             ----                                              ------------  ----------  ---------------  -------------  ---
  default               busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
  kube-system           coredns-74ff55c5b-gtjp2                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m24s
  kube-system           etcd-old-k8s-version-145659                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m31s
  kube-system           kindnet-lqrj9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m24s
  kube-system           kube-apiserver-old-k8s-version-145659             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
  kube-system           kube-controller-manager-old-k8s-version-145659    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m31s
  kube-system           kube-proxy-mxqgj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
  kube-system           kube-scheduler-old-k8s-version-145659             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m31s
  kube-system           metrics-server-9975d5f86-wxlv8                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m35s
  kube-system           storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
  kubernetes-dashboard  dashboard-metrics-scraper-8d5bb5db8-cl8l4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
  kubernetes-dashboard  kubernetes-dashboard-cd95d586-httgs               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                950m (47%)   100m (5%)
  memory             420Mi (5%)   220Mi (2%)
  ephemeral-storage  100Mi (0%)   0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
  hugepages-32Mi     0 (0%)       0 (0%)
  hugepages-64Ki     0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  NodeHasSufficientMemory  8m51s (x4 over 8m51s)  kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m51s (x5 over 8m51s)  kubelet     Node old-k8s-version-145659 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m51s (x4 over 8m51s)  kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientPID
  Normal  Starting                 8m31s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  8m31s                  kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m31s                  kubelet     Node old-k8s-version-145659 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m31s                  kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m31s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                8m24s                  kubelet     Node old-k8s-version-145659 status is now: NodeReady
  Normal  Starting                 8m22s                  kube-proxy  Starting kube-proxy.
  Normal  Starting                 6m6s                   kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)    kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)    kubelet     Node old-k8s-version-145659 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m6s (x7 over 6m6s)    kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  6m6s                   kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m53s                  kube-proxy  Starting kube-proxy.
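Two details in this dump are worth decoding. The duplicated "Starting kubelet." events mark the restart of the node container by the second minikube start, and the percentages under "Allocated resources" are simply the per-pod requests from the pod table summed and divided by the node's Allocatable values (2 CPUs, 8022308Ki). A short Go sketch reproducing that arithmetic, with the values copied from the tables above:

package main

import "fmt"

func main() {
	// Reproduce the "Allocated resources" percentages: sum the per-pod
	// requests from the pod table, divide by the node's allocatable values.
	allocatableCPUm := int64(2000)    // 2 CPUs = 2000 millicores
	allocatableMemKi := int64(8022308)

	cpuRequestsM := int64(100 + 100 + 100 + 250 + 200 + 100 + 100) // = 950m
	memRequestsKi := int64(70+100+50+200) * 1024                   // 420Mi in Ki

	fmt.Printf("cpu     %dm (%d%%)\n", cpuRequestsM, cpuRequestsM*100/allocatableCPUm)   // 950m (47%)
	fmt.Printf("memory  %dMi (%d%%)\n", memRequestsKi/1024, memRequestsKi*100/allocatableMemKi) // 420Mi (5%)
}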
==> dmesg <==
[Jan20 16:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014724] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.513857] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.029310] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.772508] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +5.261433] kauditd_printk_skb: 36 callbacks suppressed
[Jan20 17:36] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
==> etcd [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192] <==
2025-01-20 17:48:25.120706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:48:35.120566 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:48:45.121152 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:48:55.120696 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:49:05.120775 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:49:15.121566 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:49:25.120919 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:49:35.120743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:49:45.120973 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:49:55.120626 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:50:05.120760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:50:15.120618 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:50:25.120732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:50:35.120740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:50:45.123185 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:50:55.120862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:51:05.120833 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:51:15.120913 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:51:25.120593 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:51:35.120634 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:51:45.120959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:51:55.120773 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:52:05.120967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:52:15.120952 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:52:25.120621 I | etcdserver/api/etcdhttp: /health OK (status code 200)
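The steady ten-second "/health OK" cadence above is etcd answering its liveness probe on the client URL. A minimal poller with the same cadence (a sketch only: it skips TLS verification for brevity, whereas a real kubeadm etcd requires client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Poll etcd's /health endpoint on the ~10s cadence seen in the log.
	// Assumption: TLS verification is skipped here to keep the sketch
	// short; the actual probe authenticates with client certs.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for range time.Tick(10 * time.Second) {
		resp, err := client.Get("https://127.0.0.1:2379/health")
		if err != nil {
			fmt.Println("health check failed:", err)
			continue
		}
		fmt.Println("/health", resp.Status)
		resp.Body.Close()
	}
}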
==> etcd [658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec] <==
raft2025/01/20 17:43:38 INFO: ea7e25599daad906 became candidate at term 2
raft2025/01/20 17:43:38 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2025/01/20 17:43:38 INFO: ea7e25599daad906 became leader at term 2
raft2025/01/20 17:43:38 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-01-20 17:43:38.554146 I | etcdserver: setting up the initial cluster version to 3.4
2025-01-20 17:43:38.555242 N | etcdserver/membership: set the initial cluster version to 3.4
2025-01-20 17:43:38.555433 I | etcdserver/api: enabled capabilities for version 3.4
2025-01-20 17:43:38.555507 I | etcdserver: published {Name:old-k8s-version-145659 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-01-20 17:43:38.555532 I | embed: ready to serve client requests
2025-01-20 17:43:38.560722 I | embed: serving client requests on 127.0.0.1:2379
2025-01-20 17:43:38.573464 I | embed: ready to serve client requests
2025-01-20 17:43:38.575247 I | embed: serving client requests on 192.168.76.2:2379
2025-01-20 17:43:58.449177 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:43:59.406240 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:44:09.406301 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:44:19.406306 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:44:29.406428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:44:39.406640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:44:49.406549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:44:59.406475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:45:09.406591 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:45:19.406485 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:45:29.406436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:45:39.406425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 17:45:49.406346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
17:52:29 up 1:34, 0 users, load average: 1.63, 1.77, 2.17
Linux old-k8s-version-145659 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478] <==
I0120 17:50:25.833249 1 main.go:301] handling current node
I0120 17:50:35.824932 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:50:35.824965 1 main.go:301] handling current node
I0120 17:50:45.824564 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:50:45.824602 1 main.go:301] handling current node
I0120 17:50:55.833332 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:50:55.833365 1 main.go:301] handling current node
I0120 17:51:05.824500 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:51:05.824633 1 main.go:301] handling current node
I0120 17:51:15.827875 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:51:15.827911 1 main.go:301] handling current node
I0120 17:51:25.833825 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:51:25.833864 1 main.go:301] handling current node
I0120 17:51:35.824196 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:51:35.824286 1 main.go:301] handling current node
I0120 17:51:45.833048 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:51:45.833082 1 main.go:301] handling current node
I0120 17:51:55.831597 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:51:55.831632 1 main.go:301] handling current node
I0120 17:52:05.831428 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:52:05.831678 1 main.go:301] handling current node
I0120 17:52:15.832888 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:52:15.832921 1 main.go:301] handling current node
I0120 17:52:25.833767 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:52:25.833806 1 main.go:301] handling current node
==> kindnet [c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f] <==
I0120 17:44:07.536826 1 controller.go:365] Waiting for informer caches to sync
I0120 17:44:07.536833 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I0120 17:44:07.723505 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0120 17:44:07.723545 1 metrics.go:61] Registering metrics
I0120 17:44:07.723617 1 controller.go:401] Syncing nftables rules
I0120 17:44:17.544127 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:44:17.544187 1 main.go:301] handling current node
I0120 17:44:27.536583 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:44:27.536617 1 main.go:301] handling current node
I0120 17:44:37.536660 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:44:37.536723 1 main.go:301] handling current node
I0120 17:44:47.544711 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:44:47.544746 1 main.go:301] handling current node
I0120 17:44:57.543782 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:44:57.543815 1 main.go:301] handling current node
I0120 17:45:07.537343 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:45:07.537376 1 main.go:301] handling current node
I0120 17:45:17.544285 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:45:17.544318 1 main.go:301] handling current node
I0120 17:45:27.543267 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:45:27.543300 1 main.go:301] handling current node
I0120 17:45:37.536990 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:45:37.537020 1 main.go:301] handling current node
I0120 17:45:47.540337 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0120 17:45:47.540398 1 main.go:301] handling current node
==> kube-apiserver [a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e] <==
I0120 17:43:46.444805 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0120 17:43:46.444856 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0120 17:43:46.460114 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0120 17:43:46.468180 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0120 17:43:46.468201 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0120 17:43:46.963006 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0120 17:43:47.023197 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0120 17:43:47.149965 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0120 17:43:47.151201 1 controller.go:606] quota admission added evaluator for: endpoints
I0120 17:43:47.156822 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0120 17:43:48.087063 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0120 17:43:48.749417 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0120 17:43:48.822899 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0120 17:43:57.253673 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0120 17:44:03.986816 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0120 17:44:04.096335 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0120 17:44:19.554892 1 client.go:360] parsed scheme: "passthrough"
I0120 17:44:19.554937 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 17:44:19.554971 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 17:44:51.192186 1 client.go:360] parsed scheme: "passthrough"
I0120 17:44:51.192232 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 17:44:51.192265 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 17:45:29.900908 1 client.go:360] parsed scheme: "passthrough"
I0120 17:45:29.900951 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 17:45:29.900960 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad] <==
I0120 17:48:56.190147 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 17:48:56.190181 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 17:49:31.253359 1 client.go:360] parsed scheme: "passthrough"
I0120 17:49:31.253405 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 17:49:31.253415 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0120 17:49:36.356501 1 handler_proxy.go:102] no RequestInfo found in the context
E0120 17:49:36.356702 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0120 17:49:36.356720 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 17:50:05.104413 1 client.go:360] parsed scheme: "passthrough"
I0120 17:50:05.104482 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 17:50:05.104492 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 17:50:43.727916 1 client.go:360] parsed scheme: "passthrough"
I0120 17:50:43.727965 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 17:50:43.728131 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 17:51:23.081480 1 client.go:360] parsed scheme: "passthrough"
I0120 17:51:23.081528 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 17:51:23.081537 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0120 17:51:34.244674 1 handler_proxy.go:102] no RequestInfo found in the context
E0120 17:51:34.244741 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0120 17:51:34.244755 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 17:52:01.897998 1 client.go:360] parsed scheme: "passthrough"
I0120 17:52:01.898055 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 17:52:01.898064 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
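Both 503 bursts above come from API aggregation: v1beta1.metrics.k8s.io is registered as an APIService backed by the metrics-server Service, and because that pod never starts (see the ImagePullBackOff entries in the kubelet section below), the aggregator's proxy returns "service unavailable" whenever the OpenAPI controller polls it. A hypothetical way to inspect the APIService status with client-go's dynamic client (the kubeconfig path is an assumption, not taken from the test):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// APIService objects live in the apiregistration.k8s.io group; the 503s
	// above are the aggregator failing to proxy to this object's backend.
	gvr := schema.GroupVersionResource{
		Group:    "apiregistration.k8s.io",
		Version:  "v1",
		Resource: "apiservices",
	}
	obj, err := dyn.Resource(gvr).Get(context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	conds, _, _ := unstructured.NestedSlice(obj.Object, "status", "conditions")
	fmt.Println(conds) // expect Available=False while metrics-server is down
}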
==> kube-controller-manager [6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f] <==
I0120 17:44:04.086808 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-145659" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0120 17:44:04.091989 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-65gdx"
I0120 17:44:04.099772 1 range_allocator.go:373] Set node old-k8s-version-145659 PodCIDR to [10.244.0.0/24]
I0120 17:44:04.100084 1 shared_informer.go:247] Caches are synced for expand
E0120 17:44:04.124613 1 range_allocator.go:361] Node old-k8s-version-145659 already has a CIDR allocated [10.244.0.0/24]. Releasing the new one.
E0120 17:44:04.136086 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0120 17:44:04.137229 1 shared_informer.go:247] Caches are synced for resource quota
I0120 17:44:04.137362 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-gtjp2"
E0120 17:44:04.157964 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0120 17:44:04.159249 1 shared_informer.go:247] Caches are synced for resource quota
I0120 17:44:04.184238 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mxqgj"
I0120 17:44:04.203457 1 shared_informer.go:247] Caches are synced for attach detach
I0120 17:44:04.205063 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lqrj9"
I0120 17:44:04.299782 1 shared_informer.go:247] Caches are synced for persistent volume
E0120 17:44:04.300497 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"49a018a0-e8a9-49ea-a4f8-032b341ec2c5", ResourceVersion:"258", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63872991828, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001675c60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001675c80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4001675ca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001a0e280), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001675
cc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001675ce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001675d20)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40019e8ba0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000dc2a58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a6d490), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002f3d58)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000dc2aa8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0120 17:44:04.411159 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0120 17:44:04.711619 1 shared_informer.go:247] Caches are synced for garbage collector
I0120 17:44:04.729775 1 shared_informer.go:247] Caches are synced for garbage collector
I0120 17:44:04.729799 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0120 17:44:05.365440 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0120 17:44:05.401172 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-65gdx"
I0120 17:44:09.036782 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0120 17:45:52.714577 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E0120 17:45:52.920094 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0120 17:45:52.928601 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
==> kube-controller-manager [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c] <==
E0120 17:48:24.790405 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 17:48:30.388263 1 request.go:655] Throttling request took 1.048091094s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 17:48:31.241431 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 17:48:55.292217 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 17:49:02.892477 1 request.go:655] Throttling request took 1.047991938s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 17:49:03.743947 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 17:49:25.794091 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 17:49:35.394526 1 request.go:655] Throttling request took 1.048487874s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
W0120 17:49:36.245593 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 17:49:56.296073 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 17:50:07.896224 1 request.go:655] Throttling request took 1.048326156s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 17:50:08.747517 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 17:50:26.798212 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 17:50:40.397979 1 request.go:655] Throttling request took 1.048255406s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 17:50:41.249198 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 17:50:57.300029 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 17:51:12.899756 1 request.go:655] Throttling request took 1.048463917s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
W0120 17:51:13.751175 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 17:51:27.801946 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 17:51:45.401766 1 request.go:655] Throttling request took 1.048422857s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
W0120 17:51:46.253066 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 17:51:58.304342 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 17:52:17.903502 1 request.go:655] Throttling request took 1.04843924s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
W0120 17:52:18.754875 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 17:52:28.814922 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
==> kube-proxy [6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42] <==
I0120 17:44:06.502728 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0120 17:44:06.502816 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0120 17:44:06.531563 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0120 17:44:06.531669 1 server_others.go:185] Using iptables Proxier.
I0120 17:44:06.531921 1 server.go:650] Version: v1.20.0
I0120 17:44:06.532433 1 config.go:315] Starting service config controller
I0120 17:44:06.532442 1 shared_informer.go:240] Waiting for caches to sync for service config
I0120 17:44:06.532457 1 config.go:224] Starting endpoint slice config controller
I0120 17:44:06.532461 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0120 17:44:06.632523 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0120 17:44:06.632588 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e] <==
I0120 17:46:35.537525 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0120 17:46:35.537681 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0120 17:46:35.577936 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0120 17:46:35.578207 1 server_others.go:185] Using iptables Proxier.
I0120 17:46:35.578587 1 server.go:650] Version: v1.20.0
I0120 17:46:35.579663 1 config.go:315] Starting service config controller
I0120 17:46:35.579751 1 shared_informer.go:240] Waiting for caches to sync for service config
I0120 17:46:35.579823 1 config.go:224] Starting endpoint slice config controller
I0120 17:46:35.579868 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0120 17:46:35.679926 1 shared_informer.go:247] Caches are synced for service config
I0120 17:46:35.680047 1 shared_informer.go:247] Caches are synced for endpoint slice config
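Both kube-proxy instances show the standard client-go informer startup handshake: "Waiting for caches to sync" followed by "Caches are synced" once the initial List/Watch completes. A minimal sketch of the same pattern as a standalone program (hypothetical; the kubeconfig path is an assumption):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()
	factory.Start(stop)

	// Mirrors the "Waiting for caches to sync" / "Caches are synced" pair
	// logged by kube-proxy's service config controller.
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		fmt.Println("cache sync failed")
		return
	}
	fmt.Println("caches are synced for service config")
}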
==> kube-scheduler [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040] <==
I0120 17:46:27.554936 1 serving.go:331] Generated self-signed cert in-memory
W0120 17:46:33.249433 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0120 17:46:33.249463 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0120 17:46:33.249478 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0120 17:46:33.249483 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0120 17:46:33.504944 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0120 17:46:33.518455 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 17:46:33.520562 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 17:46:33.521669 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0120 17:46:33.621007 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90] <==
I0120 17:43:40.516697 1 serving.go:331] Generated self-signed cert in-memory
W0120 17:43:45.593592 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0120 17:43:45.593833 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0120 17:43:45.593946 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0120 17:43:45.593955 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0120 17:43:45.671869 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 17:43:45.671903 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 17:43:45.672810 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0120 17:43:45.673044 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0120 17:43:45.686917 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 17:43:45.689160 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 17:43:45.689297 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 17:43:45.689362 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 17:43:45.689428 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 17:43:45.689491 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 17:43:45.694813 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 17:43:45.695121 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 17:43:45.695335 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 17:43:45.695664 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 17:43:45.695898 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 17:43:45.699460 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 17:43:46.551048 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 17:43:46.693907 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 17:43:46.741428 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0120 17:43:47.172062 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jan 20 17:50:42 old-k8s-version-145659 kubelet[662]: E0120 17:50:42.404811 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
Jan 20 17:50:55 old-k8s-version-145659 kubelet[662]: E0120 17:50:55.403674 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: I0120 17:50:56.402944 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: E0120 17:50:56.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
Jan 20 17:51:07 old-k8s-version-145659 kubelet[662]: E0120 17:51:07.403709 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: I0120 17:51:10.403051 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: E0120 17:51:10.403931 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
Jan 20 17:51:18 old-k8s-version-145659 kubelet[662]: E0120 17:51:18.403950 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: I0120 17:51:22.403948 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: E0120 17:51:22.404319 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
Jan 20 17:51:29 old-k8s-version-145659 kubelet[662]: E0120 17:51:29.403725 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: I0120 17:51:33.402925 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: I0120 17:51:47.403014 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: I0120 17:52:02.407403 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: I0120 17:52:14.404937 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
Jan 20 17:52:24 old-k8s-version-145659 kubelet[662]: E0120 17:52:24.404016 662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 17:52:29 old-k8s-version-145659 kubelet[662]: I0120 17:52:29.402951 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
Jan 20 17:52:29 old-k8s-version-145659 kubelet[662]: E0120 17:52:29.403433 662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
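The two errors that repeat through this kubelet log have different root causes: metrics-server is stuck in ImagePullBackOff because fake.domain is not a resolvable registry (the TestStartStop suite appears to point the addon at an unpullable image deliberately), while dashboard-metrics-scraper starts, crashes, and is then throttled by an escalating CrashLoopBackOff delay (2m40s by this point). Assuming the profile is still up and the addons carry their usual k8s-app labels, both states can be confirmed with:

kubectl --context old-k8s-version-145659 -n kube-system get pods -l k8s-app=metrics-server
kubectl --context old-k8s-version-145659 -n kubernetes-dashboard describe pod -l k8s-app=dashboard-metrics-scraper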
==> kubernetes-dashboard [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8] <==
2025/01/20 17:46:57 Starting overwatch
2025/01/20 17:46:57 Using namespace: kubernetes-dashboard
2025/01/20 17:46:57 Using in-cluster config to connect to apiserver
2025/01/20 17:46:57 Using secret token for csrf signing
2025/01/20 17:46:57 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/01/20 17:46:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/01/20 17:46:57 Successful initial request to the apiserver, version: v1.20.0
2025/01/20 17:46:57 Generating JWE encryption key
2025/01/20 17:46:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/01/20 17:46:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/01/20 17:46:58 Initializing JWE encryption key from synchronized object
2025/01/20 17:46:58 Creating in-cluster Sidecar client
2025/01/20 17:46:58 Serving insecurely on HTTP port: 9090
2025/01/20 17:46:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:47:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:47:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:48:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:48:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:49:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:49:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:50:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:50:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:51:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:51:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 17:52:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
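Every health check above fails the same way: the apiserver reports that it is unable to handle requests for the dashboard-metrics-scraper service, most likely because the service's only backing pod is the one crash-looping in the kubelet log, leaving it with no ready endpoints. A quick check, assuming the service name taken from the error message:

kubectl --context old-k8s-version-145659 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper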
==> storage-provisioner [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2] <==
I0120 17:47:16.522832 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0120 17:47:16.544962 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0120 17:47:16.545174 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0120 17:47:33.995668 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0120 17:47:33.995957 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-145659_991e8f58-ec24-4b17-89db-bafb81509e25!
I0120 17:47:33.996994 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"050d1ce1-85b4-4c9b-b2e6-8b644a582fe8", APIVersion:"v1", ResourceVersion:"831", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-145659_991e8f58-ec24-4b17-89db-bafb81509e25 became leader
I0120 17:47:34.096938 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-145659_991e8f58-ec24-4b17-89db-bafb81509e25!
==> storage-provisioner [91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd] <==
I0120 17:46:35.359063 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0120 17:47:05.365793 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
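The two storage-provisioner entries read as one restart sequence: the first container (91b967b2...) exited fatally at 17:47:05 because it could not reach the apiserver's ClusterIP (10.96.0.1:443) within its 32s timeout while the control plane was still coming back up, and its replacement (027296a4...) then won the kube-system/k8s.io-minikube-hostpath leader lease at 17:47:33 and ran normally. Assuming the cluster is still reachable, the lease holder and apiserver health can be re-checked with:

kubectl --context old-k8s-version-145659 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
kubectl --context old-k8s-version-145659 get --raw /readyz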
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145659 -n old-k8s-version-145659
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-145659 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-wxlv8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-145659 describe pod metrics-server-9975d5f86-wxlv8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-145659 describe pod metrics-server-9975d5f86-wxlv8: exit status 1 (151.528309ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-wxlv8" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-145659 describe pod metrics-server-9975d5f86-wxlv8: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (385.51s)
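Two things stand out in the post-mortem itself. First, the pod that keeps the test from passing is metrics-server-9975d5f86-wxlv8, consistent with the ImagePullBackOff in the kubelet log. Second, the describe step races the cluster: the pod is present in the field-selector listing but has been deleted (or replaced by a new ReplicaSet pod) by the time describe runs, hence the NotFound. A less racy post-mortem would capture the listing and the detail in a single call, for example:

kubectl --context old-k8s-version-145659 get po -A --field-selector=status.phase!=Running -o yaml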