=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-684625 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E0217 13:18:46.325259 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-684625 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m15.425204774s)
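For reference, the failing invocation can be replayed outside the test harness. A minimal Go sketch (illustrative only, not the test's actual Run helper), assuming the out/minikube-linux-arm64 binary and the profile name shown above:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Flags copied verbatim from the failing invocation logged above.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "old-k8s-version-684625",
		"--memory=2200", "--alsologtostderr", "--wait=true",
		"--driver=docker", "--container-runtime=containerd",
		"--kubernetes-version=v1.20.0")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// On failure err is a *exec.ExitError carrying the non-zero
		// exit status (102 in the run recorded here).
		fmt.Fprintln(os.Stderr, "start failed:", err)
		os.Exit(1)
	}
}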
-- stdout --
* [old-k8s-version-684625] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20427
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
* Using the docker driver based on existing profile
* Starting "old-k8s-version-684625" primary control-plane node in "old-k8s-version-684625" cluster
* Pulling base image v0.0.46-1739182054-20387 ...
* Restarting existing docker container for "old-k8s-version-684625" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-684625 addons enable metrics-server
* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
-- /stdout --
** stderr **
I0217 13:18:08.625965 2295157 out.go:345] Setting OutFile to fd 1 ...
I0217 13:18:08.626170 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 13:18:08.626197 2295157 out.go:358] Setting ErrFile to fd 2...
I0217 13:18:08.626214 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 13:18:08.626489 2295157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
I0217 13:18:08.626893 2295157 out.go:352] Setting JSON to false
I0217 13:18:08.627937 2295157 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":309452,"bootTime":1739488837,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I0217 13:18:08.628033 2295157 start.go:139] virtualization:
I0217 13:18:08.631539 2295157 out.go:177] * [old-k8s-version-684625] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0217 13:18:08.635491 2295157 out.go:177] - MINIKUBE_LOCATION=20427
I0217 13:18:08.635560 2295157 notify.go:220] Checking for updates...
I0217 13:18:08.641709 2295157 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0217 13:18:08.645128 2295157 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
I0217 13:18:08.647976 2295157 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
I0217 13:18:08.651401 2295157 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0217 13:18:08.654466 2295157 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0217 13:18:08.658109 2295157 config.go:182] Loaded profile config "old-k8s-version-684625": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0217 13:18:08.661711 2295157 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
I0217 13:18:08.664628 2295157 driver.go:394] Setting default libvirt URI to qemu:///system
I0217 13:18:08.718041 2295157 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
I0217 13:18:08.718242 2295157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0217 13:18:08.795607 2295157 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:69 SystemTime:2025-02-17 13:18:08.784630881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0217 13:18:08.795714 2295157 docker.go:318] overlay module found
I0217 13:18:08.799208 2295157 out.go:177] * Using the docker driver based on existing profile
I0217 13:18:08.802023 2295157 start.go:297] selected driver: docker
I0217 13:18:08.802043 2295157 start.go:901] validating driver "docker" against &{Name:old-k8s-version-684625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-684625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0217 13:18:08.802166 2295157 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0217 13:18:08.802943 2295157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0217 13:18:08.868190 2295157 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:69 SystemTime:2025-02-17 13:18:08.85836252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0217 13:18:08.868577 2295157 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0217 13:18:08.868608 2295157 cni.go:84] Creating CNI manager for ""
I0217 13:18:08.868646 2295157 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0217 13:18:08.868693 2295157 start.go:340] cluster config:
{Name:old-k8s-version-684625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-684625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0217 13:18:08.872387 2295157 out.go:177] * Starting "old-k8s-version-684625" primary control-plane node in "old-k8s-version-684625" cluster
I0217 13:18:08.874510 2295157 cache.go:121] Beginning downloading kic base image for docker with containerd
I0217 13:18:08.878108 2295157 out.go:177] * Pulling base image v0.0.46-1739182054-20387 ...
I0217 13:18:08.882190 2295157 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0217 13:18:08.882250 2295157 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0217 13:18:08.882264 2295157 cache.go:56] Caching tarball of preloaded images
I0217 13:18:08.882280 2295157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
I0217 13:18:08.882359 2295157 preload.go:172] Found /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0217 13:18:08.882370 2295157 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0217 13:18:08.882482 2295157 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/config.json ...
I0217 13:18:08.904671 2295157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon, skipping pull
I0217 13:18:08.904694 2295157 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad exists in daemon, skipping load
I0217 13:18:08.904714 2295157 cache.go:230] Successfully downloaded all kic artifacts
I0217 13:18:08.904745 2295157 start.go:360] acquireMachinesLock for old-k8s-version-684625: {Name:mka6c369035b962d62683df0b54332779fc916c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0217 13:18:08.904823 2295157 start.go:364] duration metric: took 54.924µs to acquireMachinesLock for "old-k8s-version-684625"
I0217 13:18:08.904848 2295157 start.go:96] Skipping create...Using existing machine configuration
I0217 13:18:08.904857 2295157 fix.go:54] fixHost starting:
I0217 13:18:08.905154 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
I0217 13:18:08.922991 2295157 fix.go:112] recreateIfNeeded on old-k8s-version-684625: state=Stopped err=<nil>
W0217 13:18:08.923026 2295157 fix.go:138] unexpected machine state, will restart: <nil>
I0217 13:18:08.925705 2295157 out.go:177] * Restarting existing docker container for "old-k8s-version-684625" ...
I0217 13:18:08.929383 2295157 cli_runner.go:164] Run: docker start old-k8s-version-684625
I0217 13:18:09.334154 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
I0217 13:18:09.365477 2295157 kic.go:430] container "old-k8s-version-684625" state is running.
I0217 13:18:09.365966 2295157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-684625
I0217 13:18:09.393207 2295157 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/config.json ...
I0217 13:18:09.393443 2295157 machine.go:93] provisionDockerMachine start ...
I0217 13:18:09.393509 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:09.415323 2295157 main.go:141] libmachine: Using SSH client type: native
I0217 13:18:09.415593 2295157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil> [] 0s} 127.0.0.1 50067 <nil> <nil>}
I0217 13:18:09.415613 2295157 main.go:141] libmachine: About to run SSH command:
hostname
I0217 13:18:09.417981 2295157 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0217 13:18:12.553517 2295157 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-684625
I0217 13:18:12.553545 2295157 ubuntu.go:169] provisioning hostname "old-k8s-version-684625"
I0217 13:18:12.553626 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:12.578569 2295157 main.go:141] libmachine: Using SSH client type: native
I0217 13:18:12.578824 2295157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil> [] 0s} 127.0.0.1 50067 <nil> <nil>}
I0217 13:18:12.578843 2295157 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-684625 && echo "old-k8s-version-684625" | sudo tee /etc/hostname
I0217 13:18:12.727273 2295157 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-684625
I0217 13:18:12.727422 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:12.748467 2295157 main.go:141] libmachine: Using SSH client type: native
I0217 13:18:12.748733 2295157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil> [] 0s} 127.0.0.1 50067 <nil> <nil>}
I0217 13:18:12.748762 2295157 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-684625' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-684625/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-684625' | sudo tee -a /etc/hosts;
fi
fi
I0217 13:18:12.890357 2295157 main.go:141] libmachine: SSH cmd err, output: <nil>:
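The SSH script above is minikube's guarded /etc/hosts edit: rewrite an existing 127.0.1.1 entry if present, otherwise append one. The same core logic as a standalone Go sketch (ensureHostname is an illustrative name, and the script's initial "hostname already present" grep guard is omitted for brevity):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostname rewrites an existing 127.0.1.1 line or appends one,
// leaving all other /etc/hosts entries untouched.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte("127.0.1.1 "+name))
	} else {
		data = append(data, []byte("127.0.1.1 "+name+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "old-k8s-version-684625"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}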
I0217 13:18:12.890386 2295157 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20427-2080001/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-2080001/.minikube}
I0217 13:18:12.890469 2295157 ubuntu.go:177] setting up certificates
I0217 13:18:12.890490 2295157 provision.go:84] configureAuth start
I0217 13:18:12.890558 2295157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-684625
I0217 13:18:12.941443 2295157 provision.go:143] copyHostCerts
I0217 13:18:12.941511 2295157 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.pem, removing ...
I0217 13:18:12.941532 2295157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.pem
I0217 13:18:12.941612 2295157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.pem (1082 bytes)
I0217 13:18:12.941769 2295157 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-2080001/.minikube/cert.pem, removing ...
I0217 13:18:12.941781 2295157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-2080001/.minikube/cert.pem
I0217 13:18:12.941815 2295157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-2080001/.minikube/cert.pem (1123 bytes)
I0217 13:18:12.942055 2295157 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-2080001/.minikube/key.pem, removing ...
I0217 13:18:12.942064 2295157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-2080001/.minikube/key.pem
I0217 13:18:12.942108 2295157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-2080001/.minikube/key.pem (1675 bytes)
I0217 13:18:12.942264 2295157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-684625 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-684625]
I0217 13:18:14.063393 2295157 provision.go:177] copyRemoteCerts
I0217 13:18:14.063506 2295157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0217 13:18:14.063626 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:14.131458 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
I0217 13:18:14.275893 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0217 13:18:14.341774 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0217 13:18:14.389465 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0217 13:18:14.433626 2295157 provision.go:87] duration metric: took 1.543116912s to configureAuth
I0217 13:18:14.433767 2295157 ubuntu.go:193] setting minikube options for container-runtime
I0217 13:18:14.433967 2295157 config.go:182] Loaded profile config "old-k8s-version-684625": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0217 13:18:14.433980 2295157 machine.go:96] duration metric: took 5.040521046s to provisionDockerMachine
I0217 13:18:14.433988 2295157 start.go:293] postStartSetup for "old-k8s-version-684625" (driver="docker")
I0217 13:18:14.434003 2295157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0217 13:18:14.434051 2295157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0217 13:18:14.434097 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:14.463063 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
I0217 13:18:14.606663 2295157 ssh_runner.go:195] Run: cat /etc/os-release
I0217 13:18:14.616548 2295157 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0217 13:18:14.616582 2295157 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0217 13:18:14.616592 2295157 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0217 13:18:14.616599 2295157 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0217 13:18:14.616609 2295157 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-2080001/.minikube/addons for local assets ...
I0217 13:18:14.616662 2295157 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-2080001/.minikube/files for local assets ...
I0217 13:18:14.616739 2295157 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-2080001/.minikube/files/etc/ssl/certs/20853732.pem -> 20853732.pem in /etc/ssl/certs
I0217 13:18:14.616856 2295157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0217 13:18:14.640038 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/files/etc/ssl/certs/20853732.pem --> /etc/ssl/certs/20853732.pem (1708 bytes)
I0217 13:18:14.695950 2295157 start.go:296] duration metric: took 261.946482ms for postStartSetup
I0217 13:18:14.696053 2295157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0217 13:18:14.696110 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:14.735500 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
I0217 13:18:14.863200 2295157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0217 13:18:14.870327 2295157 fix.go:56] duration metric: took 5.965462888s for fixHost
I0217 13:18:14.870350 2295157 start.go:83] releasing machines lock for "old-k8s-version-684625", held for 5.965514629s
I0217 13:18:14.870416 2295157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-684625
I0217 13:18:14.894724 2295157 ssh_runner.go:195] Run: cat /version.json
I0217 13:18:14.894747 2295157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0217 13:18:14.894776 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:14.894801 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:14.921574 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
I0217 13:18:14.936489 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
I0217 13:18:15.034814 2295157 ssh_runner.go:195] Run: systemctl --version
I0217 13:18:15.211070 2295157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0217 13:18:15.216305 2295157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0217 13:18:15.235746 2295157 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0217 13:18:15.235824 2295157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0217 13:18:15.245835 2295157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0217 13:18:15.245859 2295157 start.go:495] detecting cgroup driver to use...
I0217 13:18:15.245918 2295157 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0217 13:18:15.246007 2295157 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0217 13:18:15.263261 2295157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0217 13:18:15.276831 2295157 docker.go:217] disabling cri-docker service (if available) ...
I0217 13:18:15.276897 2295157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0217 13:18:15.291126 2295157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0217 13:18:15.303941 2295157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0217 13:18:15.407168 2295157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0217 13:18:15.515967 2295157 docker.go:233] disabling docker service ...
I0217 13:18:15.516115 2295157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0217 13:18:15.531249 2295157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0217 13:18:15.544514 2295157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0217 13:18:15.650186 2295157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0217 13:18:15.758151 2295157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0217 13:18:15.772741 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0217 13:18:15.800272 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0217 13:18:15.825800 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0217 13:18:15.844780 2295157 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0217 13:18:15.844870 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0217 13:18:15.855643 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0217 13:18:15.866608 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0217 13:18:15.880396 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0217 13:18:15.889964 2295157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0217 13:18:15.901790 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0217 13:18:15.911888 2295157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0217 13:18:15.921263 2295157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0217 13:18:15.930333 2295157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0217 13:18:16.044646 2295157 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0217 13:18:16.268026 2295157 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0217 13:18:16.268092 2295157 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0217 13:18:16.278321 2295157 start.go:563] Will wait 60s for crictl version
I0217 13:18:16.278435 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:18:16.284084 2295157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0217 13:18:16.337433 2295157 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.25
RuntimeApiVersion: v1
I0217 13:18:16.337497 2295157 ssh_runner.go:195] Run: containerd --version
I0217 13:18:16.368611 2295157 ssh_runner.go:195] Run: containerd --version
I0217 13:18:16.409695 2295157 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
I0217 13:18:16.412710 2295157 cli_runner.go:164] Run: docker network inspect old-k8s-version-684625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0217 13:18:16.435576 2295157 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0217 13:18:16.439455 2295157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0217 13:18:16.454443 2295157 kubeadm.go:883] updating cluster {Name:old-k8s-version-684625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-684625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0217 13:18:16.454574 2295157 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0217 13:18:16.454639 2295157 ssh_runner.go:195] Run: sudo crictl images --output json
I0217 13:18:16.505590 2295157 containerd.go:627] all images are preloaded for containerd runtime.
I0217 13:18:16.505697 2295157 containerd.go:534] Images already preloaded, skipping extraction
I0217 13:18:16.505809 2295157 ssh_runner.go:195] Run: sudo crictl images --output json
I0217 13:18:16.556464 2295157 containerd.go:627] all images are preloaded for containerd runtime.
I0217 13:18:16.556483 2295157 cache_images.go:84] Images are preloaded, skipping loading
I0217 13:18:16.556491 2295157 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
I0217 13:18:16.556593 2295157 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-684625 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-684625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0217 13:18:16.556647 2295157 ssh_runner.go:195] Run: sudo crictl info
I0217 13:18:16.623899 2295157 cni.go:84] Creating CNI manager for ""
I0217 13:18:16.623979 2295157 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0217 13:18:16.624005 2295157 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0217 13:18:16.624059 2295157 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-684625 NodeName:old-k8s-version-684625 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0217 13:18:16.624232 2295157 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-684625"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
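The rendered manifest above is written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the copy already on the node (see the sudo diff -u run further down). As a quick local sanity check that such a rendered file parses as multi-document YAML, here is a sketch assuming the gopkg.in/yaml.v3 package:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path is illustrative; point it at a rendered kubeadm config.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "parse error:", err)
			os.Exit(1)
		}
		// Expect InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration and KubeProxyConfiguration documents.
		fmt.Printf("parsed %v/%v\n", doc["apiVersion"], doc["kind"])
	}
}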
I0217 13:18:16.624337 2295157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0217 13:18:16.634753 2295157 binaries.go:44] Found k8s binaries, skipping transfer
I0217 13:18:16.634929 2295157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0217 13:18:16.644923 2295157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0217 13:18:16.666479 2295157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0217 13:18:16.687885 2295157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0217 13:18:16.712524 2295157 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0217 13:18:16.716303 2295157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0217 13:18:16.728455 2295157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0217 13:18:16.830380 2295157 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0217 13:18:16.845352 2295157 certs.go:68] Setting up /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625 for IP: 192.168.85.2
I0217 13:18:16.845369 2295157 certs.go:194] generating shared ca certs ...
I0217 13:18:16.845385 2295157 certs.go:226] acquiring lock for ca certs: {Name:mk1e57d70f14134ded87b3cd6dacdce4d25ab3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0217 13:18:16.845533 2295157 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.key
I0217 13:18:16.845584 2295157 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/proxy-client-ca.key
I0217 13:18:16.845596 2295157 certs.go:256] generating profile certs ...
I0217 13:18:16.845705 2295157 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.key
I0217 13:18:16.845777 2295157 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/apiserver.key.562aa0ca
I0217 13:18:16.845821 2295157 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/proxy-client.key
I0217 13:18:16.845932 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/2085373.pem (1338 bytes)
W0217 13:18:16.845967 2295157 certs.go:480] ignoring /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/2085373_empty.pem, impossibly tiny 0 bytes
I0217 13:18:16.845980 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca-key.pem (1679 bytes)
I0217 13:18:16.846007 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca.pem (1082 bytes)
I0217 13:18:16.846034 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/cert.pem (1123 bytes)
I0217 13:18:16.846059 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/key.pem (1675 bytes)
I0217 13:18:16.846105 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/files/etc/ssl/certs/20853732.pem (1708 bytes)
I0217 13:18:16.846751 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0217 13:18:16.882353 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0217 13:18:16.910722 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0217 13:18:16.971241 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0217 13:18:17.030660 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0217 13:18:17.089201 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0217 13:18:17.120216 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0217 13:18:17.149544 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0217 13:18:17.183779 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/2085373.pem --> /usr/share/ca-certificates/2085373.pem (1338 bytes)
I0217 13:18:17.211314 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/files/etc/ssl/certs/20853732.pem --> /usr/share/ca-certificates/20853732.pem (1708 bytes)
I0217 13:18:17.238019 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0217 13:18:17.268373 2295157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0217 13:18:17.300227 2295157 ssh_runner.go:195] Run: openssl version
I0217 13:18:17.306629 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2085373.pem && ln -fs /usr/share/ca-certificates/2085373.pem /etc/ssl/certs/2085373.pem"
I0217 13:18:17.318238 2295157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2085373.pem
I0217 13:18:17.325471 2295157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 17 12:38 /usr/share/ca-certificates/2085373.pem
I0217 13:18:17.325615 2295157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2085373.pem
I0217 13:18:17.337430 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2085373.pem /etc/ssl/certs/51391683.0"
I0217 13:18:17.348920 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20853732.pem && ln -fs /usr/share/ca-certificates/20853732.pem /etc/ssl/certs/20853732.pem"
I0217 13:18:17.359292 2295157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20853732.pem
I0217 13:18:17.363434 2295157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 17 12:38 /usr/share/ca-certificates/20853732.pem
I0217 13:18:17.363575 2295157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20853732.pem
I0217 13:18:17.371035 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20853732.pem /etc/ssl/certs/3ec20f2e.0"
I0217 13:18:17.380012 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0217 13:18:17.389703 2295157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0217 13:18:17.393538 2295157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 17 12:32 /usr/share/ca-certificates/minikubeCA.pem
I0217 13:18:17.393681 2295157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0217 13:18:17.401163 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0217 13:18:17.410115 2295157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0217 13:18:17.413935 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0217 13:18:17.420815 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0217 13:18:17.427883 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0217 13:18:17.435074 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0217 13:18:17.443015 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0217 13:18:17.450305 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0217 13:18:17.457621 2295157 kubeadm.go:392] StartCluster: {Name:old-k8s-version-684625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-684625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0217 13:18:17.457780 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0217 13:18:17.457888 2295157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0217 13:18:17.526281 2295157 cri.go:89] found id: "d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994"
I0217 13:18:17.526310 2295157 cri.go:89] found id: "bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767"
I0217 13:18:17.526326 2295157 cri.go:89] found id: "e25655e00932f6940f9106254c70f637b722255928a692b65231ed7503119f81"
I0217 13:18:17.526332 2295157 cri.go:89] found id: "b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5"
I0217 13:18:17.526335 2295157 cri.go:89] found id: "eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9"
I0217 13:18:17.526339 2295157 cri.go:89] found id: "b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9"
I0217 13:18:17.526342 2295157 cri.go:89] found id: "6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3"
I0217 13:18:17.526345 2295157 cri.go:89] found id: "50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c"
I0217 13:18:17.526352 2295157 cri.go:89] found id: ""
I0217 13:18:17.526428 2295157 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0217 13:18:17.547002 2295157 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-02-17T13:18:17Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0217 13:18:17.547190 2295157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0217 13:18:17.559119 2295157 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0217 13:18:17.559204 2295157 kubeadm.go:593] restartPrimaryControlPlane start ...
I0217 13:18:17.559340 2295157 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0217 13:18:17.571446 2295157 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0217 13:18:17.572148 2295157 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-684625" does not appear in /home/jenkins/minikube-integration/20427-2080001/kubeconfig
I0217 13:18:17.572398 2295157 kubeconfig.go:62] /home/jenkins/minikube-integration/20427-2080001/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-684625" cluster setting kubeconfig missing "old-k8s-version-684625" context setting]
I0217 13:18:17.572884 2295157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-2080001/kubeconfig: {Name:mk44077e5743bb96254549e3eaf259b0845749a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0217 13:18:17.575010 2295157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0217 13:18:17.587613 2295157 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I0217 13:18:17.587716 2295157 kubeadm.go:597] duration metric: took 28.474835ms to restartPrimaryControlPlane
I0217 13:18:17.587750 2295157 kubeadm.go:394] duration metric: took 130.139463ms to StartCluster
I0217 13:18:17.587814 2295157 settings.go:142] acquiring lock: {Name:mk54d8990a2b55fcc4b6e61aceb051d4e6e4e25d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0217 13:18:17.587934 2295157 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20427-2080001/kubeconfig
I0217 13:18:17.588794 2295157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-2080001/kubeconfig: {Name:mk44077e5743bb96254549e3eaf259b0845749a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0217 13:18:17.589153 2295157 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0217 13:18:17.589723 2295157 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0217 13:18:17.589837 2295157 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-684625"
I0217 13:18:17.589861 2295157 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-684625"
W0217 13:18:17.589872 2295157 addons.go:247] addon storage-provisioner should already be in state true
I0217 13:18:17.589902 2295157 host.go:66] Checking if "old-k8s-version-684625" exists ...
I0217 13:18:17.590522 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
I0217 13:18:17.590914 2295157 config.go:182] Loaded profile config "old-k8s-version-684625": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0217 13:18:17.591063 2295157 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-684625"
I0217 13:18:17.591129 2295157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-684625"
I0217 13:18:17.591531 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
I0217 13:18:17.594337 2295157 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-684625"
I0217 13:18:17.594689 2295157 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-684625"
W0217 13:18:17.594731 2295157 addons.go:247] addon metrics-server should already be in state true
I0217 13:18:17.594816 2295157 host.go:66] Checking if "old-k8s-version-684625" exists ...
I0217 13:18:17.594445 2295157 out.go:177] * Verifying Kubernetes components...
I0217 13:18:17.594542 2295157 addons.go:69] Setting dashboard=true in profile "old-k8s-version-684625"
I0217 13:18:17.595840 2295157 addons.go:238] Setting addon dashboard=true in "old-k8s-version-684625"
W0217 13:18:17.597251 2295157 addons.go:247] addon dashboard should already be in state true
I0217 13:18:17.597431 2295157 host.go:66] Checking if "old-k8s-version-684625" exists ...
I0217 13:18:17.597238 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
I0217 13:18:17.599756 2295157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0217 13:18:17.606987 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
I0217 13:18:17.645729 2295157 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-684625"
W0217 13:18:17.645756 2295157 addons.go:247] addon default-storageclass should already be in state true
I0217 13:18:17.645798 2295157 host.go:66] Checking if "old-k8s-version-684625" exists ...
I0217 13:18:17.646354 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
I0217 13:18:17.688895 2295157 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0217 13:18:17.692040 2295157 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0217 13:18:17.692067 2295157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0217 13:18:17.692142 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:17.717110 2295157 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0217 13:18:17.725751 2295157 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0217 13:18:17.725988 2295157 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0217 13:18:17.726004 2295157 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0217 13:18:17.726117 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:17.732230 2295157 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0217 13:18:17.737832 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0217 13:18:17.737869 2295157 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0217 13:18:17.737973 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:17.744324 2295157 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0217 13:18:17.744357 2295157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0217 13:18:17.744443 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
I0217 13:18:17.770842 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
I0217 13:18:17.815685 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
I0217 13:18:17.818938 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
I0217 13:18:17.821365 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
I0217 13:18:17.899081 2295157 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0217 13:18:17.954967 2295157 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-684625" to be "Ready" ...
I0217 13:18:18.037410 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0217 13:18:18.040878 2295157 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0217 13:18:18.040967 2295157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0217 13:18:18.145087 2295157 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0217 13:18:18.145172 2295157 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0217 13:18:18.150656 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0217 13:18:18.150737 2295157 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0217 13:18:18.173392 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0217 13:18:18.218151 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0217 13:18:18.218235 2295157 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0217 13:18:18.230955 2295157 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0217 13:18:18.231036 2295157 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0217 13:18:18.275081 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0217 13:18:18.275171 2295157 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0217 13:18:18.304123 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0217 13:18:18.363190 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0217 13:18:18.363263 2295157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
W0217 13:18:18.401685 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:18.401758 2295157 retry.go:31] will retry after 150.911569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0217 13:18:18.430068 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:18.430186 2295157 retry.go:31] will retry after 258.316003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:18.451499 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0217 13:18:18.451577 2295157 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0217 13:18:18.500481 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0217 13:18:18.500508 2295157 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0217 13:18:18.545345 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0217 13:18:18.545385 2295157 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0217 13:18:18.553801 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0217 13:18:18.569400 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0217 13:18:18.569439 2295157 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
W0217 13:18:18.585995 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:18.586038 2295157 retry.go:31] will retry after 292.791338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:18.628791 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0217 13:18:18.628821 2295157 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0217 13:18:18.689267 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0217 13:18:18.697370 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:18.697404 2295157 retry.go:31] will retry after 276.99122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:18.702414 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0217 13:18:18.879854 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0217 13:18:18.909299 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:18.909336 2295157 retry.go:31] will retry after 514.923774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0217 13:18:18.914967 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:18.914997 2295157 retry.go:31] will retry after 363.856496ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:18.974907 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0217 13:18:19.052531 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:19.052645 2295157 retry.go:31] will retry after 511.942409ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0217 13:18:19.169429 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:19.169518 2295157 retry.go:31] will retry after 358.176208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:19.279866 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0217 13:18:19.404529 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:19.404647 2295157 retry.go:31] will retry after 555.062538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:19.425007 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0217 13:18:19.526129 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:19.526223 2295157 retry.go:31] will retry after 419.422751ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:19.528273 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0217 13:18:19.565684 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0217 13:18:19.645935 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:19.646034 2295157 retry.go:31] will retry after 930.325939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0217 13:18:19.721364 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:19.721456 2295157 retry.go:31] will retry after 793.821457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:19.945901 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0217 13:18:19.955662 2295157 node_ready.go:53] error getting node "old-k8s-version-684625": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-684625": dial tcp 192.168.85.2:8443: connect: connection refused
I0217 13:18:19.960017 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0217 13:18:20.068986 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:20.069069 2295157 retry.go:31] will retry after 765.192301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0217 13:18:20.149813 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:20.149912 2295157 retry.go:31] will retry after 590.508182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:20.515911 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0217 13:18:20.577354 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0217 13:18:20.618530 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:20.618610 2295157 retry.go:31] will retry after 1.061356629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0217 13:18:20.701856 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:20.701936 2295157 retry.go:31] will retry after 1.071140655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:20.741110 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0217 13:18:20.834645 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0217 13:18:20.847981 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:20.848060 2295157 retry.go:31] will retry after 951.859228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0217 13:18:20.940956 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:20.941033 2295157 retry.go:31] will retry after 956.309552ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:21.680173 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0217 13:18:21.767606 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:21.767634 2295157 retry.go:31] will retry after 1.482756733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:21.773857 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0217 13:18:21.800073 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0217 13:18:21.897517 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0217 13:18:21.923955 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:21.923983 2295157 retry.go:31] will retry after 2.795962653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0217 13:18:22.036271 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:22.036392 2295157 retry.go:31] will retry after 1.118084246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0217 13:18:22.065813 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:22.065843 2295157 retry.go:31] will retry after 1.662572099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:22.455613 2295157 node_ready.go:53] error getting node "old-k8s-version-684625": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-684625": dial tcp 192.168.85.2:8443: connect: connection refused
I0217 13:18:23.154952 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0217 13:18:23.250866 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0217 13:18:23.254080 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:23.254113 2295157 retry.go:31] will retry after 1.864580743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0217 13:18:23.343309 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:23.343346 2295157 retry.go:31] will retry after 2.811155514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:23.729488 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0217 13:18:23.834695 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:23.834722 2295157 retry.go:31] will retry after 1.967376353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:24.456374 2295157 node_ready.go:53] error getting node "old-k8s-version-684625": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-684625": dial tcp 192.168.85.2:8443: connect: connection refused
I0217 13:18:24.720931 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0217 13:18:24.821559 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:24.821610 2295157 retry.go:31] will retry after 2.740959084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:25.119782 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0217 13:18:25.218887 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:25.218922 2295157 retry.go:31] will retry after 2.866268131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:25.802895 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0217 13:18:25.890827 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:25.890859 2295157 retry.go:31] will retry after 3.490245305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:26.154712 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0217 13:18:26.354138 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:26.354170 2295157 retry.go:31] will retry after 2.171663456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0217 13:18:26.955587 2295157 node_ready.go:53] error getting node "old-k8s-version-684625": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-684625": dial tcp 192.168.85.2:8443: connect: connection refused
I0217 13:18:27.563292 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0217 13:18:28.085390 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0217 13:18:28.526026 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0217 13:18:29.381908 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0217 13:18:37.458144 2295157 node_ready.go:53] error getting node "old-k8s-version-684625": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-684625": net/http: TLS handshake timeout
I0217 13:18:37.903327 2295157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.339994488s)
W0217 13:18:37.903357 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I0217 13:18:37.903373 2295157 retry.go:31] will retry after 4.908317628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I0217 13:18:38.435344 2295157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.349904947s)
W0217 13:18:38.435391 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I0217 13:18:38.435407 2295157 retry.go:31] will retry after 5.696676717s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I0217 13:18:38.838321 2295157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.312249521s)
W0217 13:18:38.838353 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I0217 13:18:38.838370 2295157 retry.go:31] will retry after 4.583534276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
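All three addon applies above fail the same way while the restarted apiserver is still finishing its TLS setup, and each is rescheduled after a slightly different randomized delay. The sketch below shows the general shape of that retry loop; applyWithRetry, the attempt count, and the backoff range are illustrative stand-ins, not minikube's actual retry.go parameters.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs one kubectl apply until it succeeds or the
// attempt budget runs out, sleeping a jittered delay between tries,
// the same shape as the "apply failed, will retry" / "will retry
// after 4.908317628s" pairs in the log above.
func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
		if i < attempts-1 {
			// Randomized delay so parallel appliers do not retry in lockstep.
			delay := time.Duration(4000+rand.Intn(2000)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, lastErr)
			time.Sleep(delay)
		}
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}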
I0217 13:18:39.353715 2295157 node_ready.go:49] node "old-k8s-version-684625" has status "Ready":"True"
I0217 13:18:39.353743 2295157 node_ready.go:38] duration metric: took 21.398644051s for node "old-k8s-version-684625" to be "Ready" ...
I0217 13:18:39.353765 2295157 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
I0217 13:18:39.522946 2295157 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-hbrnk" in "kube-system" namespace to be "Ready" ...
I0217 13:18:39.658180 2295157 pod_ready.go:93] pod "coredns-74ff55c5b-hbrnk" in "kube-system" namespace has status "Ready":"True"
I0217 13:18:39.658201 2295157 pod_ready.go:82] duration metric: took 135.214164ms for pod "coredns-74ff55c5b-hbrnk" in "kube-system" namespace to be "Ready" ...
I0217 13:18:39.658213 2295157 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
I0217 13:18:39.711948 2295157 pod_ready.go:93] pod "etcd-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"True"
I0217 13:18:39.711994 2295157 pod_ready.go:82] duration metric: took 53.771916ms for pod "etcd-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
I0217 13:18:39.712016 2295157 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
I0217 13:18:39.756772 2295157 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"True"
I0217 13:18:39.756803 2295157 pod_ready.go:82] duration metric: took 44.779045ms for pod "kube-apiserver-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
I0217 13:18:39.756816 2295157 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
I0217 13:18:39.766763 2295157 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"True"
I0217 13:18:39.766805 2295157 pod_ready.go:82] duration metric: took 9.98055ms for pod "kube-controller-manager-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
I0217 13:18:39.766818 2295157 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xhtkg" in "kube-system" namespace to be "Ready" ...
I0217 13:18:39.829306 2295157 pod_ready.go:93] pod "kube-proxy-xhtkg" in "kube-system" namespace has status "Ready":"True"
I0217 13:18:39.829329 2295157 pod_ready.go:82] duration metric: took 62.503567ms for pod "kube-proxy-xhtkg" in "kube-system" namespace to be "Ready" ...
I0217 13:18:39.829341 2295157 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
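Each pod_ready.go wait above is the same loop: read the pod's Ready condition, return as soon as it reports "True", otherwise check again every couple of seconds until the budget runs out. A minimal stand-alone version, going through kubectl's jsonpath output rather than client-go (which is what minikube itself uses):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls a pod's Ready condition until it reports "True"
// or the deadline passes, mirroring the pod_ready.go entries above.
func waitPodReady(ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		status := strings.TrimSpace(string(out))
		if err == nil && status == "True" {
			return nil
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, status)
		time.Sleep(2 * time.Second) // the log shows checks roughly every 2-2.5s
	}
	return fmt.Errorf("pod %q in %q namespace never became Ready within %v", name, ns, timeout)
}

func main() {
	if err := waitPodReady("kube-system", "kube-scheduler-old-k8s-version-684625", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}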
I0217 13:18:40.097818 2295157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.715865457s)
I0217 13:18:41.834948 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:18:42.812211 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0217 13:18:43.423049 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0217 13:18:43.724574 2295157 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-684625"
I0217 13:18:43.835761 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:18:44.133000 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0217 13:18:44.605574 2295157 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-684625 addons enable metrics-server
I0217 13:18:44.608538 2295157 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0217 13:18:44.611712 2295157 addons.go:514] duration metric: took 27.021997868s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
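The interleaved Run/Completed lines show the four addon applies executing concurrently, which is why one duration metric (27.02s) covers all of them: the clock runs from the first apply being launched to the last one finishing. A rough sketch of that fan-out, with applyAddon as a hypothetical stand-in for the per-addon work:

package main

import (
	"fmt"
	"sync"
	"time"
)

// enableAddons launches each addon apply in its own goroutine and
// reports one wall-clock duration for the whole batch, matching the
// single "duration metric: took ... for enable addons" line above.
func enableAddons(addons []string, applyAddon func(string) error) {
	start := time.Now()
	var wg sync.WaitGroup
	for _, a := range addons {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if err := applyAddon(name); err != nil {
				fmt.Printf("addon %s failed: %v\n", name, err)
			}
		}(a)
	}
	wg.Wait()
	fmt.Printf("duration metric: took %v for enable addons: enabled=%v\n",
		time.Since(start), addons)
}

func main() {
	enableAddons(
		[]string{"default-storageclass", "storage-provisioner", "metrics-server", "dashboard"},
		func(string) error { time.Sleep(100 * time.Millisecond); return nil },
	)
}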
I0217 13:18:46.336658 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:18:48.834230 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:18:50.834780 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:18:53.334677 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:18:55.835743 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:18:58.348472 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:00.838188 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:02.845514 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:05.335267 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:07.335374 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:09.371597 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:11.834925 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:13.846606 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:16.334805 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:18.835176 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:21.334574 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:23.835435 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:25.835656 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:28.334466 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:30.335742 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:32.337189 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:34.343677 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:36.838662 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:39.336406 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:41.838833 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:44.336721 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:46.835146 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:48.835441 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:50.836645 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:53.335066 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:55.839913 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:58.335313 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
I0217 13:19:59.841082 2295157 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"True"
I0217 13:19:59.841108 2295157 pod_ready.go:82] duration metric: took 1m20.011758848s for pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
I0217 13:19:59.841120 2295157 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace to be "Ready" ...
I0217 13:20:01.846998 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:04.347209 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:06.847289 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:08.847346 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:11.346494 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:13.846673 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:16.347207 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:18.847426 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:21.346549 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:23.346826 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:25.352377 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:27.847604 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:30.346817 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:32.846613 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:34.847062 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:37.347523 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:39.846307 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:41.852050 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:44.346818 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:46.846689 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:48.847082 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:50.847364 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:53.346723 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:55.346762 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:57.846811 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:20:59.846948 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:02.346604 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:04.847068 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:07.346917 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:09.349083 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:11.846598 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:14.347656 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:16.846837 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:19.346455 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:21.347583 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:23.845798 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:25.846840 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:27.846999 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:30.346256 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:32.846763 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:35.346628 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:37.347461 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:39.851961 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:42.347939 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:44.846311 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:46.846946 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:49.347039 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:51.847466 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:54.347444 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:56.847348 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:21:59.346259 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:01.346444 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:03.348141 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:05.847142 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:08.347497 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:10.846362 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:13.346497 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:15.846629 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:17.846959 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:20.346416 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:22.347252 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:24.847048 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:27.346827 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:29.847009 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:32.349273 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:34.847769 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:37.346166 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:39.346814 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:41.347109 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:43.846441 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:45.846759 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:48.346492 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:50.346738 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:52.347184 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:54.847427 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:57.346427 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:22:59.846790 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:01.846951 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:03.847085 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:06.346078 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:08.347146 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:10.846304 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:13.346337 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:15.353730 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:17.846539 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:20.346840 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:22.347359 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:24.846220 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:26.846316 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:28.850223 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:31.346515 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:33.847115 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:36.346645 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:38.346974 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:40.846807 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:42.847299 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:44.848827 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:47.347148 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:49.349279 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:51.847190 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:53.847663 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:55.849015 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:58.346357 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
I0217 13:23:59.847119 2295157 pod_ready.go:82] duration metric: took 4m0.005984559s for pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace to be "Ready" ...
E0217 13:23:59.847208 2295157 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0217 13:23:59.847225 2295157 pod_ready.go:39] duration metric: took 5m20.493442033s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
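Note that the metrics-server wait stops after 4m0s even though each pod nominally gets "up to 6m0s": the per-pod waits evidently share one parent context whose deadline covers the whole verification phase, and it is that shared deadline expiring that surfaces as "context deadline exceeded". A small sketch of the pattern, with made-up names and a short demo timeout:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitReady is one per-pod wait that also respects a shared budget:
// it returns early with context.DeadlineExceeded when the parent
// context expires, even if its own nominal timeout has not elapsed.
func waitReady(ctx context.Context, pod string, ready func() bool) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		if ready() {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition for %q: %w", pod, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// The real budget in the log is 6m0s; a short timeout keeps the demo quick.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	err := waitReady(ctx, "metrics-server-9975d5f86-bj72q", func() bool { return false })
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("WaitExtra:", err)
	}
}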
I0217 13:23:59.847242 2295157 api_server.go:52] waiting for apiserver process to appear ...
I0217 13:23:59.847282 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0217 13:23:59.847357 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0217 13:23:59.885005 2295157 cri.go:89] found id: "1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d"
I0217 13:23:59.885030 2295157 cri.go:89] found id: "b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9"
I0217 13:23:59.885035 2295157 cri.go:89] found id: ""
I0217 13:23:59.885042 2295157 logs.go:282] 2 containers: [1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9]
I0217 13:23:59.885102 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:23:59.888705 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:23:59.892160 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0217 13:23:59.892235 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0217 13:23:59.938246 2295157 cri.go:89] found id: "8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b"
I0217 13:23:59.938267 2295157 cri.go:89] found id: "6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3"
I0217 13:23:59.938272 2295157 cri.go:89] found id: ""
I0217 13:23:59.938279 2295157 logs.go:282] 2 containers: [8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b 6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3]
I0217 13:23:59.938339 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:23:59.941943 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:23:59.945478 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0217 13:23:59.945570 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0217 13:23:59.985037 2295157 cri.go:89] found id: "7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790"
I0217 13:23:59.985058 2295157 cri.go:89] found id: "d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994"
I0217 13:23:59.985063 2295157 cri.go:89] found id: ""
I0217 13:23:59.985070 2295157 logs.go:282] 2 containers: [7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790 d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994]
I0217 13:23:59.985126 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:23:59.988758 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:23:59.992103 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0217 13:23:59.992195 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0217 13:24:00.112402 2295157 cri.go:89] found id: "4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c"
I0217 13:24:00.112852 2295157 cri.go:89] found id: "50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c"
I0217 13:24:00.112864 2295157 cri.go:89] found id: ""
I0217 13:24:00.112873 2295157 logs.go:282] 2 containers: [4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c 50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c]
I0217 13:24:00.112963 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:00.122975 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:00.131554 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0217 13:24:00.131656 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0217 13:24:00.250520 2295157 cri.go:89] found id: "8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e"
I0217 13:24:00.250606 2295157 cri.go:89] found id: "b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5"
I0217 13:24:00.250629 2295157 cri.go:89] found id: ""
I0217 13:24:00.250656 2295157 logs.go:282] 2 containers: [8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5]
I0217 13:24:00.250771 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:00.260928 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:00.271266 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0217 13:24:00.271517 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0217 13:24:00.341287 2295157 cri.go:89] found id: "153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044"
I0217 13:24:00.341369 2295157 cri.go:89] found id: "eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9"
I0217 13:24:00.341391 2295157 cri.go:89] found id: ""
I0217 13:24:00.341418 2295157 logs.go:282] 2 containers: [153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044 eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9]
I0217 13:24:00.341502 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:00.346938 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:00.351739 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0217 13:24:00.351871 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0217 13:24:00.403565 2295157 cri.go:89] found id: "1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d"
I0217 13:24:00.403643 2295157 cri.go:89] found id: "bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767"
I0217 13:24:00.403664 2295157 cri.go:89] found id: ""
I0217 13:24:00.403690 2295157 logs.go:282] 2 containers: [1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767]
I0217 13:24:00.403769 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:00.408046 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:00.412192 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0217 13:24:00.412306 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0217 13:24:00.457827 2295157 cri.go:89] found id: "21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42"
I0217 13:24:00.457852 2295157 cri.go:89] found id: ""
I0217 13:24:00.457862 2295157 logs.go:282] 1 container: [21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42]
I0217 13:24:00.457930 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:00.462436 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0217 13:24:00.462599 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0217 13:24:00.509072 2295157 cri.go:89] found id: "9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6"
I0217 13:24:00.509150 2295157 cri.go:89] found id: "758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f"
I0217 13:24:00.509170 2295157 cri.go:89] found id: ""
I0217 13:24:00.509193 2295157 logs.go:282] 2 containers: [9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6 758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f]
I0217 13:24:00.509281 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:00.513343 2295157 ssh_runner.go:195] Run: which crictl
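Each cri.go listing above is one crictl invocation: --quiet makes crictl print bare container IDs, one per line, and the "found id:" entries are those lines echoed back, with the empty "" marking the end of the list. A minimal version of that step:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs crictl with --quiet and keeps the non-empty
// output lines, which are the container IDs for the named filter.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		fmt.Println("listing failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}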
I0217 13:24:00.517930 2295157 logs.go:123] Gathering logs for kindnet [1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d] ...
I0217 13:24:00.517967 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d"
I0217 13:24:00.564871 2295157 logs.go:123] Gathering logs for container status ...
I0217 13:24:00.564904 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0217 13:24:00.632262 2295157 logs.go:123] Gathering logs for kubelet ...
I0217 13:24:00.632290 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0217 13:24:00.691097 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.309993 661 reflector.go:138] object-"kube-system"/"kindnet-token-vfbnq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vfbnq" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:00.691379 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.310267 661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-zqt6v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-zqt6v" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:00.691594 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.311034 661 reflector.go:138] object-"default"/"default-token-jrqqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-jrqqq" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:00.691801 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.315771 661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:00.692019 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.316033 661 reflector.go:138] object-"kube-system"/"kube-proxy-token-ghwn6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-ghwn6" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:00.692240 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.317925 661 reflector.go:138] object-"kube-system"/"metrics-server-token-bpn96": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-bpn96" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:00.692451 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.318273 661 reflector.go:138] object-"kube-system"/"coredns-token-f6dfc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f6dfc" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:00.692650 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.319367 661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:00.701972 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.094609 661 pod_workers.go:191] Error syncing pod db74f299-b905-402c-8142-b6b360bb1ae2 ("kindnet-d7wd6_kube-system(db74f299-b905-402c-8142-b6b360bb1ae2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:00.704829 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.610279 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0217 13:24:00.706764 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.776479 661 pod_workers.go:191] Error syncing pod db74f299-b905-402c-8142-b6b360bb1ae2 ("kindnet-d7wd6_kube-system(db74f299-b905-402c-8142-b6b360bb1ae2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:00.706960 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.798146 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.707899 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.848610 661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:00.708717 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.918969 661 pod_workers.go:191] Error syncing pod bff5b8f0-6b85-450b-804b-24e5e32c97ba ("busybox_default(bff5b8f0-6b85-450b-804b-24e5e32c97ba)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:00.710961 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.210034 661 pod_workers.go:191] Error syncing pod ae0c5ea7-e2c9-427f-bdf2-a284c975e898 ("coredns-74ff55c5b-hbrnk_kube-system(ae0c5ea7-e2c9-427f-bdf2-a284c975e898)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:00.712739 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.797379 661 pod_workers.go:191] Error syncing pod ae0c5ea7-e2c9-427f-bdf2-a284c975e898 ("coredns-74ff55c5b-hbrnk_kube-system(ae0c5ea7-e2c9-427f-bdf2-a284c975e898)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:00.713797 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.800620 661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:00.714764 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.803924 661 pod_workers.go:191] Error syncing pod bff5b8f0-6b85-450b-804b-24e5e32c97ba ("busybox_default(bff5b8f0-6b85-450b-804b-24e5e32c97ba)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:00.716352 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:43 old-k8s-version-684625 kubelet[661]: E0217 13:18:43.302571 661 pod_workers.go:191] Error syncing pod ffc7d812-df4e-4e5f-8523-a049282e4e8a ("kube-proxy-xhtkg_kube-system(ffc7d812-df4e-4e5f-8523-a049282e4e8a)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:00.717881 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:43 old-k8s-version-684625 kubelet[661]: E0217 13:18:43.837069 661 pod_workers.go:191] Error syncing pod ffc7d812-df4e-4e5f-8523-a049282e4e8a ("kube-proxy-xhtkg_kube-system(ffc7d812-df4e-4e5f-8523-a049282e4e8a)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:00.721039 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:54 old-k8s-version-684625 kubelet[661]: E0217 13:18:54.524274 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0217 13:24:00.723622 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:05 old-k8s-version-684625 kubelet[661]: E0217 13:19:05.936388 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.724082 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:06 old-k8s-version-684625 kubelet[661]: E0217 13:19:06.947715 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.724270 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:08 old-k8s-version-684625 kubelet[661]: E0217 13:19:08.506836 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.724621 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:15 old-k8s-version-684625 kubelet[661]: E0217 13:19:15.014578 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.727386 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:20 old-k8s-version-684625 kubelet[661]: E0217 13:19:20.517133 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0217 13:24:00.727832 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:25 old-k8s-version-684625 kubelet[661]: E0217 13:19:25.001801 661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
W0217 13:24:00.728436 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:28 old-k8s-version-684625 kubelet[661]: E0217 13:19:28.017166 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.728627 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:31 old-k8s-version-684625 kubelet[661]: E0217 13:19:31.511330 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.728957 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:35 old-k8s-version-684625 kubelet[661]: E0217 13:19:35.014933 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.729265 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:38 old-k8s-version-684625 kubelet[661]: E0217 13:19:38.507288 661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
W0217 13:24:00.729453 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:43 old-k8s-version-684625 kubelet[661]: E0217 13:19:43.507026 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.730055 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:49 old-k8s-version-684625 kubelet[661]: E0217 13:19:49.129541 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.730372 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:52 old-k8s-version-684625 kubelet[661]: E0217 13:19:52.506152 661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
W0217 13:24:00.730697 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:55 old-k8s-version-684625 kubelet[661]: E0217 13:19:55.015087 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.730881 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:58 old-k8s-version-684625 kubelet[661]: E0217 13:19:58.506545 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.731335 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:07 old-k8s-version-684625 kubelet[661]: E0217 13:20:07.506178 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.733759 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:13 old-k8s-version-684625 kubelet[661]: E0217 13:20:13.517749 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0217 13:24:00.734087 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:20 old-k8s-version-684625 kubelet[661]: E0217 13:20:20.506171 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.734273 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:24 old-k8s-version-684625 kubelet[661]: E0217 13:20:24.512163 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.734861 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:32 old-k8s-version-684625 kubelet[661]: E0217 13:20:32.259438 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.735185 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:35 old-k8s-version-684625 kubelet[661]: E0217 13:20:35.014985 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.735368 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:39 old-k8s-version-684625 kubelet[661]: E0217 13:20:39.507286 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.735694 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:50 old-k8s-version-684625 kubelet[661]: E0217 13:20:50.506239 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.735879 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:51 old-k8s-version-684625 kubelet[661]: E0217 13:20:51.506540 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.736217 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:01 old-k8s-version-684625 kubelet[661]: E0217 13:21:01.506830 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.736403 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:06 old-k8s-version-684625 kubelet[661]: E0217 13:21:06.508486 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.736727 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:14 old-k8s-version-684625 kubelet[661]: E0217 13:21:14.506262 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.736911 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:20 old-k8s-version-684625 kubelet[661]: E0217 13:21:20.506827 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.737366 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:29 old-k8s-version-684625 kubelet[661]: E0217 13:21:29.507037 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.737569 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:31 old-k8s-version-684625 kubelet[661]: E0217 13:21:31.506506 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.737915 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:40 old-k8s-version-684625 kubelet[661]: E0217 13:21:40.506099 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.740368 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:46 old-k8s-version-684625 kubelet[661]: E0217 13:21:46.514608 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0217 13:24:00.740958 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:53 old-k8s-version-684625 kubelet[661]: E0217 13:21:53.486312 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.741283 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:55 old-k8s-version-684625 kubelet[661]: E0217 13:21:55.014989 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.741467 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:57 old-k8s-version-684625 kubelet[661]: E0217 13:21:57.513924 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.741800 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:05 old-k8s-version-684625 kubelet[661]: E0217 13:22:05.506670 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.741987 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:10 old-k8s-version-684625 kubelet[661]: E0217 13:22:10.506686 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.742316 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:16 old-k8s-version-684625 kubelet[661]: E0217 13:22:16.506151 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.742501 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:21 old-k8s-version-684625 kubelet[661]: E0217 13:22:21.506684 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.742853 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:27 old-k8s-version-684625 kubelet[661]: E0217 13:22:27.511459 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.743038 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:33 old-k8s-version-684625 kubelet[661]: E0217 13:22:33.506664 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.743365 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:40 old-k8s-version-684625 kubelet[661]: E0217 13:22:40.506352 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.743548 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:48 old-k8s-version-684625 kubelet[661]: E0217 13:22:48.506643 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.743876 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:52 old-k8s-version-684625 kubelet[661]: E0217 13:22:52.506276 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.744059 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:02 old-k8s-version-684625 kubelet[661]: E0217 13:23:02.506740 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.744385 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:03 old-k8s-version-684625 kubelet[661]: E0217 13:23:03.506425 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.744713 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511407 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.744898 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511756 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.745083 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:27 old-k8s-version-684625 kubelet[661]: E0217 13:23:27.506707 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.745407 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: E0217 13:23:30.506203 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.745594 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:39 old-k8s-version-684625 kubelet[661]: E0217 13:23:39.506666 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.745930 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:00.746116 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:00.746443 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
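The run of warnings above is minikube's kubelet log scanner flagging the two recurring failures in this run: dashboard-metrics-scraper stuck in CrashLoopBackOff with a growing back-off, and metrics-server unable to pull its image because fake.domain does not resolve. A minimal sketch of reproducing the same scan by hand, assuming the docker driver and this profile name (minikube's actual matcher in logs.go is more elaborate than this grep):
# Hypothetical manual equivalent of the kubelet problem scan above.
minikube -p old-k8s-version-684625 ssh -- \
  "sudo journalctl -u kubelet -n 400 --no-pager" \
  | grep -E 'CrashLoopBackOff|ImagePullBackOff|ErrImagePull'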
I0217 13:24:00.746457 2295157 logs.go:123] Gathering logs for kube-apiserver [b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9] ...
I0217 13:24:00.746474 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9"
I0217 13:24:00.807721 2295157 logs.go:123] Gathering logs for coredns [d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994] ...
I0217 13:24:00.807772 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994"
I0217 13:24:00.858844 2295157 logs.go:123] Gathering logs for kube-scheduler [4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c] ...
I0217 13:24:00.858931 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c"
I0217 13:24:00.903061 2295157 logs.go:123] Gathering logs for kube-proxy [8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e] ...
I0217 13:24:00.903090 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e"
I0217 13:24:00.944378 2295157 logs.go:123] Gathering logs for kube-controller-manager [153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044] ...
I0217 13:24:00.944403 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044"
I0217 13:24:01.003006 2295157 logs.go:123] Gathering logs for etcd [8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b] ...
I0217 13:24:01.003040 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b"
I0217 13:24:01.046449 2295157 logs.go:123] Gathering logs for coredns [7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790] ...
I0217 13:24:01.046545 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790"
I0217 13:24:01.096817 2295157 logs.go:123] Gathering logs for kube-controller-manager [eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9] ...
I0217 13:24:01.096849 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9"
I0217 13:24:01.179600 2295157 logs.go:123] Gathering logs for kubernetes-dashboard [21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42] ...
I0217 13:24:01.179656 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42"
I0217 13:24:01.231305 2295157 logs.go:123] Gathering logs for describe nodes ...
I0217 13:24:01.231336 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0217 13:24:01.424769 2295157 logs.go:123] Gathering logs for kube-apiserver [1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d] ...
I0217 13:24:01.424811 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d"
I0217 13:24:01.498259 2295157 logs.go:123] Gathering logs for etcd [6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3] ...
I0217 13:24:01.498310 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3"
I0217 13:24:01.570184 2295157 logs.go:123] Gathering logs for kube-scheduler [50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c] ...
I0217 13:24:01.570281 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c"
I0217 13:24:01.637074 2295157 logs.go:123] Gathering logs for storage-provisioner [9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6] ...
I0217 13:24:01.637174 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6"
I0217 13:24:01.734430 2295157 logs.go:123] Gathering logs for storage-provisioner [758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f] ...
I0217 13:24:01.734458 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f"
I0217 13:24:01.805516 2295157 logs.go:123] Gathering logs for dmesg ...
I0217 13:24:01.805548 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0217 13:24:01.845813 2295157 logs.go:123] Gathering logs for kube-proxy [b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5] ...
I0217 13:24:01.845842 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5"
I0217 13:24:01.905384 2295157 logs.go:123] Gathering logs for kindnet [bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767] ...
I0217 13:24:01.905415 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767"
I0217 13:24:01.948076 2295157 logs.go:123] Gathering logs for containerd ...
I0217 13:24:01.948167 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
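Everything gathered above (per-container log tails, describe nodes, dmesg, the containerd journal) feeds the problem summary printed next. A sketch of pulling the containerd journal by hand over the same path, assuming the profile is still running:
# Hypothetical manual equivalent of the containerd gathering step.
minikube -p old-k8s-version-684625 ssh -- \
  "sudo journalctl -u containerd -n 400 --no-pager"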
I0217 13:24:02.157193 2295157 out.go:358] Setting ErrFile to fd 2...
I0217 13:24:02.157231 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0217 13:24:02.157304 2295157 out.go:270] X Problems detected in kubelet:
W0217 13:24:02.157320 2295157 out.go:270] Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: E0217 13:23:30.506203 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:02.157331 2295157 out.go:270] Feb 17 13:23:39 old-k8s-version-684625 kubelet[661]: E0217 13:23:39.506666 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:02.157344 2295157 out.go:270] Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:02.157467 2295157 out.go:270] Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:02.157482 2295157 out.go:270] Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
I0217 13:24:02.157497 2295157 out.go:358] Setting ErrFile to fd 2...
I0217 13:24:02.157507 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 13:24:12.159325 2295157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0217 13:24:12.172465 2295157 api_server.go:72] duration metric: took 5m54.583233373s to wait for apiserver process to appear ...
I0217 13:24:12.172490 2295157 api_server.go:88] waiting for apiserver healthz status ...
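Having confirmed with pgrep that a kube-apiserver process exists, minikube now polls the apiserver's healthz endpoint. A hand-run equivalent, assuming the kubeconfig context this profile created (the in-node port 8443 is minikube's usual default, not confirmed by this log):
# Hypothetical manual health checks against the same cluster.
kubectl --context old-k8s-version-684625 get --raw /healthz
minikube -p old-k8s-version-684625 ssh -- "curl -sk https://localhost:8443/healthz"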
I0217 13:24:12.172527 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0217 13:24:12.172585 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0217 13:24:12.247723 2295157 cri.go:89] found id: "1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d"
I0217 13:24:12.247746 2295157 cri.go:89] found id: "b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9"
I0217 13:24:12.247750 2295157 cri.go:89] found id: ""
I0217 13:24:12.247758 2295157 logs.go:282] 2 containers: [1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9]
I0217 13:24:12.247819 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.252444 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.256676 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0217 13:24:12.256749 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0217 13:24:12.301508 2295157 cri.go:89] found id: "8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b"
I0217 13:24:12.301534 2295157 cri.go:89] found id: "6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3"
I0217 13:24:12.301539 2295157 cri.go:89] found id: ""
I0217 13:24:12.301546 2295157 logs.go:282] 2 containers: [8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b 6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3]
I0217 13:24:12.301601 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.305554 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.310251 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0217 13:24:12.310320 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0217 13:24:12.355504 2295157 cri.go:89] found id: "7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790"
I0217 13:24:12.355527 2295157 cri.go:89] found id: "d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994"
I0217 13:24:12.355532 2295157 cri.go:89] found id: ""
I0217 13:24:12.355539 2295157 logs.go:282] 2 containers: [7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790 d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994]
I0217 13:24:12.355609 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.359467 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.363091 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0217 13:24:12.363200 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0217 13:24:12.414645 2295157 cri.go:89] found id: "4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c"
I0217 13:24:12.414718 2295157 cri.go:89] found id: "50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c"
I0217 13:24:12.414745 2295157 cri.go:89] found id: ""
I0217 13:24:12.414754 2295157 logs.go:282] 2 containers: [4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c 50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c]
I0217 13:24:12.414854 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.418862 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.422776 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0217 13:24:12.422860 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0217 13:24:12.465716 2295157 cri.go:89] found id: "8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e"
I0217 13:24:12.465737 2295157 cri.go:89] found id: "b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5"
I0217 13:24:12.465741 2295157 cri.go:89] found id: ""
I0217 13:24:12.465749 2295157 logs.go:282] 2 containers: [8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5]
I0217 13:24:12.465806 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.469723 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.473168 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0217 13:24:12.473244 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0217 13:24:12.512831 2295157 cri.go:89] found id: "153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044"
I0217 13:24:12.512852 2295157 cri.go:89] found id: "eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9"
I0217 13:24:12.512857 2295157 cri.go:89] found id: ""
I0217 13:24:12.512864 2295157 logs.go:282] 2 containers: [153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044 eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9]
I0217 13:24:12.512925 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.516740 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.520320 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0217 13:24:12.520403 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0217 13:24:12.567350 2295157 cri.go:89] found id: "1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d"
I0217 13:24:12.567371 2295157 cri.go:89] found id: "bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767"
I0217 13:24:12.567376 2295157 cri.go:89] found id: ""
I0217 13:24:12.567383 2295157 logs.go:282] 2 containers: [1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767]
I0217 13:24:12.567481 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.571188 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.574684 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0217 13:24:12.574757 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0217 13:24:12.613877 2295157 cri.go:89] found id: "9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6"
I0217 13:24:12.613910 2295157 cri.go:89] found id: "758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f"
I0217 13:24:12.613916 2295157 cri.go:89] found id: ""
I0217 13:24:12.613923 2295157 logs.go:282] 2 containers: [9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6 758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f]
I0217 13:24:12.613996 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.617770 2295157 ssh_runner.go:195] Run: which crictl
I0217 13:24:12.621355 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0217 13:24:12.621433 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0217 13:24:12.662920 2295157 cri.go:89] found id: "21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42"
I0217 13:24:12.662982 2295157 cri.go:89] found id: ""
I0217 13:24:12.662995 2295157 logs.go:282] 1 containers: [21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42]
I0217 13:24:12.663071 2295157 ssh_runner.go:195] Run: which crictl
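Most components above resolve to two container IDs because this is a restart of an existing profile: crictl ps -a lists the exited container from the first start alongside the current one. A sketch of the same enumeration by hand:
# Hypothetical manual equivalent of the CRI container listing.
minikube -p old-k8s-version-684625 ssh -- \
  "sudo crictl ps -a --name=kube-apiserver"
# Add --quiet for bare IDs, as logs.go does before running
# 'crictl logs --tail 400 <id>' on each.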
I0217 13:24:12.666972 2295157 logs.go:123] Gathering logs for storage-provisioner [9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6] ...
I0217 13:24:12.667000 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6"
I0217 13:24:12.707293 2295157 logs.go:123] Gathering logs for storage-provisioner [758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f] ...
I0217 13:24:12.707321 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f"
I0217 13:24:12.745206 2295157 logs.go:123] Gathering logs for kubelet ...
I0217 13:24:12.745237 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0217 13:24:12.795062 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.309993 661 reflector.go:138] object-"kube-system"/"kindnet-token-vfbnq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vfbnq" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:12.795343 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.310267 661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-zqt6v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-zqt6v" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:12.795566 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.311034 661 reflector.go:138] object-"default"/"default-token-jrqqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-jrqqq" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:12.795779 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.315771 661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:12.796024 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.316033 661 reflector.go:138] object-"kube-system"/"kube-proxy-token-ghwn6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-ghwn6" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:12.796251 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.317925 661 reflector.go:138] object-"kube-system"/"metrics-server-token-bpn96": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-bpn96" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:12.796466 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.318273 661 reflector.go:138] object-"kube-system"/"coredns-token-f6dfc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f6dfc" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:12.796670 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.319367 661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
W0217 13:24:12.806313 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.094609 661 pod_workers.go:191] Error syncing pod db74f299-b905-402c-8142-b6b360bb1ae2 ("kindnet-d7wd6_kube-system(db74f299-b905-402c-8142-b6b360bb1ae2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:12.808982 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.610279 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0217 13:24:12.810875 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.776479 661 pod_workers.go:191] Error syncing pod db74f299-b905-402c-8142-b6b360bb1ae2 ("kindnet-d7wd6_kube-system(db74f299-b905-402c-8142-b6b360bb1ae2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:12.811068 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.798146 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.811995 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.848610 661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:12.812842 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.918969 661 pod_workers.go:191] Error syncing pod bff5b8f0-6b85-450b-804b-24e5e32c97ba ("busybox_default(bff5b8f0-6b85-450b-804b-24e5e32c97ba)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:12.815104 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.210034 661 pod_workers.go:191] Error syncing pod ae0c5ea7-e2c9-427f-bdf2-a284c975e898 ("coredns-74ff55c5b-hbrnk_kube-system(ae0c5ea7-e2c9-427f-bdf2-a284c975e898)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:12.816924 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.797379 661 pod_workers.go:191] Error syncing pod ae0c5ea7-e2c9-427f-bdf2-a284c975e898 ("coredns-74ff55c5b-hbrnk_kube-system(ae0c5ea7-e2c9-427f-bdf2-a284c975e898)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:12.818246 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.800620 661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:12.819313 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.803924 661 pod_workers.go:191] Error syncing pod bff5b8f0-6b85-450b-804b-24e5e32c97ba ("busybox_default(bff5b8f0-6b85-450b-804b-24e5e32c97ba)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:12.820957 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:43 old-k8s-version-684625 kubelet[661]: E0217 13:18:43.302571 661 pod_workers.go:191] Error syncing pod ffc7d812-df4e-4e5f-8523-a049282e4e8a ("kube-proxy-xhtkg_kube-system(ffc7d812-df4e-4e5f-8523-a049282e4e8a)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:12.822498 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:43 old-k8s-version-684625 kubelet[661]: E0217 13:18:43.837069 661 pod_workers.go:191] Error syncing pod ffc7d812-df4e-4e5f-8523-a049282e4e8a ("kube-proxy-xhtkg_kube-system(ffc7d812-df4e-4e5f-8523-a049282e4e8a)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
W0217 13:24:12.825682 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:54 old-k8s-version-684625 kubelet[661]: E0217 13:18:54.524274 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0217 13:24:12.828225 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:05 old-k8s-version-684625 kubelet[661]: E0217 13:19:05.936388 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.828688 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:06 old-k8s-version-684625 kubelet[661]: E0217 13:19:06.947715 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.828875 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:08 old-k8s-version-684625 kubelet[661]: E0217 13:19:08.506836 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.829207 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:15 old-k8s-version-684625 kubelet[661]: E0217 13:19:15.014578 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.832004 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:20 old-k8s-version-684625 kubelet[661]: E0217 13:19:20.517133 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0217 13:24:12.832446 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:25 old-k8s-version-684625 kubelet[661]: E0217 13:19:25.001801 661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
W0217 13:24:12.833038 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:28 old-k8s-version-684625 kubelet[661]: E0217 13:19:28.017166 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.833223 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:31 old-k8s-version-684625 kubelet[661]: E0217 13:19:31.511330 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.833553 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:35 old-k8s-version-684625 kubelet[661]: E0217 13:19:35.014933 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.834136 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:38 old-k8s-version-684625 kubelet[661]: E0217 13:19:38.507288 661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
W0217 13:24:12.834337 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:43 old-k8s-version-684625 kubelet[661]: E0217 13:19:43.507026 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.834936 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:49 old-k8s-version-684625 kubelet[661]: E0217 13:19:49.129541 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.835255 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:52 old-k8s-version-684625 kubelet[661]: E0217 13:19:52.506152 661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
W0217 13:24:12.835587 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:55 old-k8s-version-684625 kubelet[661]: E0217 13:19:55.015087 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.835775 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:58 old-k8s-version-684625 kubelet[661]: E0217 13:19:58.506545 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.836272 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:07 old-k8s-version-684625 kubelet[661]: E0217 13:20:07.506178 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.838878 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:13 old-k8s-version-684625 kubelet[661]: E0217 13:20:13.517749 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0217 13:24:12.839240 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:20 old-k8s-version-684625 kubelet[661]: E0217 13:20:20.506171 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.839448 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:24 old-k8s-version-684625 kubelet[661]: E0217 13:20:24.512163 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.840578 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:32 old-k8s-version-684625 kubelet[661]: E0217 13:20:32.259438 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.840923 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:35 old-k8s-version-684625 kubelet[661]: E0217 13:20:35.014985 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.841110 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:39 old-k8s-version-684625 kubelet[661]: E0217 13:20:39.507286 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.841447 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:50 old-k8s-version-684625 kubelet[661]: E0217 13:20:50.506239 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.841636 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:51 old-k8s-version-684625 kubelet[661]: E0217 13:20:51.506540 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.841977 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:01 old-k8s-version-684625 kubelet[661]: E0217 13:21:01.506830 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.842173 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:06 old-k8s-version-684625 kubelet[661]: E0217 13:21:06.508486 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.842503 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:14 old-k8s-version-684625 kubelet[661]: E0217 13:21:14.506262 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.842697 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:20 old-k8s-version-684625 kubelet[661]: E0217 13:21:20.506827 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.843052 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:29 old-k8s-version-684625 kubelet[661]: E0217 13:21:29.507037 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.843241 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:31 old-k8s-version-684625 kubelet[661]: E0217 13:21:31.506506 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.843574 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:40 old-k8s-version-684625 kubelet[661]: E0217 13:21:40.506099 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.846096 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:46 old-k8s-version-684625 kubelet[661]: E0217 13:21:46.514608 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0217 13:24:12.846703 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:53 old-k8s-version-684625 kubelet[661]: E0217 13:21:53.486312 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.847034 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:55 old-k8s-version-684625 kubelet[661]: E0217 13:21:55.014989 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.847224 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:57 old-k8s-version-684625 kubelet[661]: E0217 13:21:57.513924 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.847555 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:05 old-k8s-version-684625 kubelet[661]: E0217 13:22:05.506670 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.847744 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:10 old-k8s-version-684625 kubelet[661]: E0217 13:22:10.506686 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.848077 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:16 old-k8s-version-684625 kubelet[661]: E0217 13:22:16.506151 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.848261 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:21 old-k8s-version-684625 kubelet[661]: E0217 13:22:21.506684 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.848591 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:27 old-k8s-version-684625 kubelet[661]: E0217 13:22:27.511459 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.848779 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:33 old-k8s-version-684625 kubelet[661]: E0217 13:22:33.506664 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.849108 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:40 old-k8s-version-684625 kubelet[661]: E0217 13:22:40.506352 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.849294 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:48 old-k8s-version-684625 kubelet[661]: E0217 13:22:48.506643 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.849623 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:52 old-k8s-version-684625 kubelet[661]: E0217 13:22:52.506276 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.849820 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:02 old-k8s-version-684625 kubelet[661]: E0217 13:23:02.506740 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.850159 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:03 old-k8s-version-684625 kubelet[661]: E0217 13:23:03.506425 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.850490 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511407 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.850675 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511756 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.850860 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:27 old-k8s-version-684625 kubelet[661]: E0217 13:23:27.506707 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.851189 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: E0217 13:23:30.506203 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.851375 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:39 old-k8s-version-684625 kubelet[661]: E0217 13:23:39.506666 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.851749 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.851935 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.852266 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:12.852452 2295157 logs.go:138] Found kubelet problem: Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.506785 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:12.852781 2295157 logs.go:138] Found kubelet problem: Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.507625 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
I0217 13:24:12.852792 2295157 logs.go:123] Gathering logs for dmesg ...
I0217 13:24:12.852807 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0217 13:24:12.871313 2295157 logs.go:123] Gathering logs for describe nodes ...
I0217 13:24:12.871339 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0217 13:24:13.013482 2295157 logs.go:123] Gathering logs for kube-proxy [b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5] ...
I0217 13:24:13.013515 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5"
I0217 13:24:13.069243 2295157 logs.go:123] Gathering logs for kube-controller-manager [eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9] ...
I0217 13:24:13.069277 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9"
I0217 13:24:13.142515 2295157 logs.go:123] Gathering logs for etcd [6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3] ...
I0217 13:24:13.142551 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3"
I0217 13:24:13.188107 2295157 logs.go:123] Gathering logs for coredns [d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994] ...
I0217 13:24:13.188137 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994"
I0217 13:24:13.244205 2295157 logs.go:123] Gathering logs for kube-scheduler [4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c] ...
I0217 13:24:13.244234 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c"
I0217 13:24:13.284869 2295157 logs.go:123] Gathering logs for containerd ...
I0217 13:24:13.284996 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0217 13:24:13.345298 2295157 logs.go:123] Gathering logs for container status ...
I0217 13:24:13.345337 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0217 13:24:13.399215 2295157 logs.go:123] Gathering logs for kube-scheduler [50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c] ...
I0217 13:24:13.399249 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c"
I0217 13:24:13.450736 2295157 logs.go:123] Gathering logs for kube-proxy [8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e] ...
I0217 13:24:13.450767 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e"
I0217 13:24:13.495231 2295157 logs.go:123] Gathering logs for kindnet [1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d] ...
I0217 13:24:13.495258 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d"
I0217 13:24:13.547422 2295157 logs.go:123] Gathering logs for kindnet [bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767] ...
I0217 13:24:13.547458 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767"
I0217 13:24:13.590826 2295157 logs.go:123] Gathering logs for kubernetes-dashboard [21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42] ...
I0217 13:24:13.590857 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42"
I0217 13:24:13.632177 2295157 logs.go:123] Gathering logs for kube-apiserver [1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d] ...
I0217 13:24:13.632207 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d"
I0217 13:24:13.704939 2295157 logs.go:123] Gathering logs for kube-apiserver [b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9] ...
I0217 13:24:13.704976 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9"
I0217 13:24:13.778637 2295157 logs.go:123] Gathering logs for etcd [8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b] ...
I0217 13:24:13.778676 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b"
I0217 13:24:13.831400 2295157 logs.go:123] Gathering logs for coredns [7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790] ...
I0217 13:24:13.831433 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790"
I0217 13:24:13.874217 2295157 logs.go:123] Gathering logs for kube-controller-manager [153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044] ...
I0217 13:24:13.874245 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044"
I0217 13:24:13.943790 2295157 out.go:358] Setting ErrFile to fd 2...
I0217 13:24:13.943825 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0217 13:24:13.943907 2295157 out.go:270] X Problems detected in kubelet:
W0217 13:24:13.943922 2295157 out.go:270] Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:13.943954 2295157 out.go:270] Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:13.943983 2295157 out.go:270] Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
W0217 13:24:13.943990 2295157 out.go:270] Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.506785 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0217 13:24:13.944001 2295157 out.go:270] Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.507625 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
I0217 13:24:13.944009 2295157 out.go:358] Setting ErrFile to fd 2...
I0217 13:24:13.944021 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 13:24:23.946621 2295157 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0217 13:24:23.960217 2295157 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0217 13:24:23.967095 2295157 out.go:201]
W0217 13:24:23.971061 2295157 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0217 13:24:23.971104 2295157 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0217 13:24:23.971123 2295157 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0217 13:24:23.971131 2295157 out.go:270] *
W0217 13:24:23.972751 2295157 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0217 13:24:23.977978 2295157 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-684625 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-684625
helpers_test.go:235: (dbg) docker inspect old-k8s-version-684625:
-- stdout --
[
{
"Id": "78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8",
"Created": "2025-02-17T13:15:16.832992873Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2295362,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-02-17T13:18:09.108632625Z",
"FinishedAt": "2025-02-17T13:18:07.986084533Z"
},
"Image": "sha256:86f383d95829214691bb905fe90945d8bf2efbbe5a717e0830a616744d143ec9",
"ResolvConfPath": "/var/lib/docker/containers/78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8/hostname",
"HostsPath": "/var/lib/docker/containers/78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8/hosts",
"LogPath": "/var/lib/docker/containers/78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8/78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8-json.log",
"Name": "/old-k8s-version-684625",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-684625:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-684625",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/656f40af279bee00d01959a503f12e2832686984a4b4f4d0d850a259ec44683e-init/diff:/var/lib/docker/overlay2/5eaadba9a34de38da1deed5c4698d3c65d1f3362c3f4e979e5616b492b5ac54b/diff",
"MergedDir": "/var/lib/docker/overlay2/656f40af279bee00d01959a503f12e2832686984a4b4f4d0d850a259ec44683e/merged",
"UpperDir": "/var/lib/docker/overlay2/656f40af279bee00d01959a503f12e2832686984a4b4f4d0d850a259ec44683e/diff",
"WorkDir": "/var/lib/docker/overlay2/656f40af279bee00d01959a503f12e2832686984a4b4f4d0d850a259ec44683e/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-684625",
"Source": "/var/lib/docker/volumes/old-k8s-version-684625/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-684625",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-684625",
"name.minikube.sigs.k8s.io": "old-k8s-version-684625",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "da4d36c193fe8f255abf5417e46afec793ff35154af12a474fab28ec4aea3e21",
"SandboxKey": "/var/run/docker/netns/da4d36c193fe",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50067"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50068"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50071"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50069"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50070"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-684625": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:55:02",
"DriverOpts": null,
"NetworkID": "c93d241abfa61e025ce640ad53ad79167f795177b575abdef5a18aa9a5aefda6",
"EndpointID": "6e1c25bd9e3ac46a1378fe8b47dabda3688f60aaead749e475ee1fe13e216cd4",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-684625",
"78c38b595a8d"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-684625 -n old-k8s-version-684625
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-684625 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-684625 logs -n 25: (3.69144496s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-717393 | cert-expiration-717393 | jenkins | v1.35.0 | 17 Feb 25 13:14 UTC | 17 Feb 25 13:14 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-461736 | force-systemd-env-461736 | jenkins | v1.35.0 | 17 Feb 25 13:14 UTC | 17 Feb 25 13:14 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-461736 | force-systemd-env-461736 | jenkins | v1.35.0 | 17 Feb 25 13:14 UTC | 17 Feb 25 13:14 UTC |
| start | -p cert-options-592751 | cert-options-592751 | jenkins | v1.35.0 | 17 Feb 25 13:14 UTC | 17 Feb 25 13:15 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-592751 ssh | cert-options-592751 | jenkins | v1.35.0 | 17 Feb 25 13:15 UTC | 17 Feb 25 13:15 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-592751 -- sudo | cert-options-592751 | jenkins | v1.35.0 | 17 Feb 25 13:15 UTC | 17 Feb 25 13:15 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-592751 | cert-options-592751 | jenkins | v1.35.0 | 17 Feb 25 13:15 UTC | 17 Feb 25 13:15 UTC |
| start | -p old-k8s-version-684625 | old-k8s-version-684625 | jenkins | v1.35.0 | 17 Feb 25 13:15 UTC | 17 Feb 25 13:17 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-717393 | cert-expiration-717393 | jenkins | v1.35.0 | 17 Feb 25 13:17 UTC | 17 Feb 25 13:17 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-717393 | cert-expiration-717393 | jenkins | v1.35.0 | 17 Feb 25 13:17 UTC | 17 Feb 25 13:17 UTC |
| addons | enable metrics-server -p old-k8s-version-684625 | old-k8s-version-684625 | jenkins | v1.35.0 | 17 Feb 25 13:17 UTC | 17 Feb 25 13:17 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| start | -p no-preload-695080 | no-preload-695080 | jenkins | v1.35.0 | 17 Feb 25 13:17 UTC | 17 Feb 25 13:19 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| stop | -p old-k8s-version-684625 | old-k8s-version-684625 | jenkins | v1.35.0 | 17 Feb 25 13:17 UTC | 17 Feb 25 13:18 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-684625 | old-k8s-version-684625 | jenkins | v1.35.0 | 17 Feb 25 13:18 UTC | 17 Feb 25 13:18 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-684625 | old-k8s-version-684625 | jenkins | v1.35.0 | 17 Feb 25 13:18 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-695080 | no-preload-695080 | jenkins | v1.35.0 | 17 Feb 25 13:19 UTC | 17 Feb 25 13:19 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-695080 | no-preload-695080 | jenkins | v1.35.0 | 17 Feb 25 13:19 UTC | 17 Feb 25 13:19 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-695080 | no-preload-695080 | jenkins | v1.35.0 | 17 Feb 25 13:19 UTC | 17 Feb 25 13:19 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-695080 | no-preload-695080 | jenkins | v1.35.0 | 17 Feb 25 13:19 UTC | 17 Feb 25 13:24 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| image | no-preload-695080 image list | no-preload-695080 | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | 17 Feb 25 13:24 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-695080 | no-preload-695080 | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | 17 Feb 25 13:24 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-695080 | no-preload-695080 | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | 17 Feb 25 13:24 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-695080 | no-preload-695080 | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | 17 Feb 25 13:24 UTC |
| delete | -p no-preload-695080 | no-preload-695080 | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | 17 Feb 25 13:24 UTC |
| start | -p embed-certs-652383 | embed-certs-652383 | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/02/17 13:24:20
Running on machine: ip-172-31-29-130
Binary: Built with gc go1.23.4 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0217 13:24:20.836692 2306840 out.go:345] Setting OutFile to fd 1 ...
I0217 13:24:20.836860 2306840 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 13:24:20.836883 2306840 out.go:358] Setting ErrFile to fd 2...
I0217 13:24:20.836905 2306840 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 13:24:20.837155 2306840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
I0217 13:24:20.837629 2306840 out.go:352] Setting JSON to false
I0217 13:24:20.838993 2306840 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":309824,"bootTime":1739488837,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I0217 13:24:20.840826 2306840 start.go:139] virtualization:
I0217 13:24:20.845551 2306840 out.go:177] * [embed-certs-652383] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0217 13:24:20.849134 2306840 out.go:177] - MINIKUBE_LOCATION=20427
I0217 13:24:20.849173 2306840 notify.go:220] Checking for updates...
I0217 13:24:20.855838 2306840 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0217 13:24:20.859153 2306840 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
I0217 13:24:20.862304 2306840 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
I0217 13:24:20.865376 2306840 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0217 13:24:20.868510 2306840 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0217 13:24:20.872689 2306840 config.go:182] Loaded profile config "old-k8s-version-684625": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0217 13:24:20.872814 2306840 driver.go:394] Setting default libvirt URI to qemu:///system
I0217 13:24:20.903691 2306840 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
I0217 13:24:20.903823 2306840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0217 13:24:20.970881 2306840 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-17 13:24:20.960875209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0217 13:24:20.970992 2306840 docker.go:318] overlay module found
I0217 13:24:20.974323 2306840 out.go:177] * Using the docker driver based on user configuration
I0217 13:24:20.977327 2306840 start.go:297] selected driver: docker
I0217 13:24:20.977344 2306840 start.go:901] validating driver "docker" against <nil>
I0217 13:24:20.977359 2306840 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0217 13:24:20.978201 2306840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0217 13:24:21.033855 2306840 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-17 13:24:21.024188248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0217 13:24:21.034063 2306840 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0217 13:24:21.034302 2306840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0217 13:24:21.037324 2306840 out.go:177] * Using Docker driver with root privileges
I0217 13:24:21.040287 2306840 cni.go:84] Creating CNI manager for ""
I0217 13:24:21.040356 2306840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0217 13:24:21.040369 2306840 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0217 13:24:21.040455 2306840 start.go:340] cluster config:
{Name:embed-certs-652383 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-652383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0217 13:24:21.045401 2306840 out.go:177] * Starting "embed-certs-652383" primary control-plane node in "embed-certs-652383" cluster
I0217 13:24:21.048294 2306840 cache.go:121] Beginning downloading kic base image for docker with containerd
I0217 13:24:21.056462 2306840 out.go:177] * Pulling base image v0.0.46-1739182054-20387 ...
I0217 13:24:21.059573 2306840 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0217 13:24:21.059635 2306840 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
I0217 13:24:21.059650 2306840 cache.go:56] Caching tarball of preloaded images
I0217 13:24:21.059730 2306840 preload.go:172] Found /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0217 13:24:21.059745 2306840 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
I0217 13:24:21.059854 2306840 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/embed-certs-652383/config.json ...
I0217 13:24:21.059880 2306840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/embed-certs-652383/config.json: {Name:mk61c1932965c859c44b5216cb9678a521748b55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0217 13:24:21.059975 2306840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
I0217 13:24:21.080739 2306840 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon, skipping pull
I0217 13:24:21.080765 2306840 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad exists in daemon, skipping load
I0217 13:24:21.080784 2306840 cache.go:230] Successfully downloaded all kic artifacts
I0217 13:24:21.080818 2306840 start.go:360] acquireMachinesLock for embed-certs-652383: {Name:mkcc625f379313cf6c4b4962258434670251f4da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0217 13:24:21.080940 2306840 start.go:364] duration metric: took 101.134µs to acquireMachinesLock for "embed-certs-652383"
I0217 13:24:21.080977 2306840 start.go:93] Provisioning new machine with config: &{Name:embed-certs-652383 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-652383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0217 13:24:21.081047 2306840 start.go:125] createHost starting for "" (driver="docker")
I0217 13:24:23.946621 2295157 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0217 13:24:23.960217 2295157 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0217 13:24:23.967095 2295157 out.go:201]
W0217 13:24:23.971061 2295157 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0217 13:24:23.971104 2295157 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0217 13:24:23.971123 2295157 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0217 13:24:23.971131 2295157 out.go:270] *
W0217 13:24:23.972751 2295157 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0217 13:24:23.977978 2295157 out.go:201]
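The stderr above is the test's terminal failure: the restarted control plane never reported v1.20.0 inside the 6m0s wait window, so minikube exited with K8S_UNHEALTHY_CONTROL_PLANE. A minimal recovery sketch, restating only what minikube itself suggests above (profile name taken from this run; `delete --all --purge` removes every local profile plus the .minikube directory, so it is only safe on a disposable CI host):

  out/minikube-linux-arm64 -p old-k8s-version-684625 logs --file=logs.txt   # capture logs to attach to a GitHub issue
  out/minikube-linux-arm64 delete --all --purge                             # wipe all profiles and cached state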
==> container status <==
CONTAINER       IMAGE           CREATED         STATE    NAME                        ATTEMPT  POD ID          POD
a4c0e20b96ef0   523cad1a4df73   2 minutes ago   Exited   dashboard-metrics-scraper   5        03bb8f6997fb2   dashboard-metrics-scraper-8d5bb5db8-6p4sg
9743cccc1e113   ba04bb24b9575   4 minutes ago   Running  storage-provisioner         2        d1fc1eee4cea5   storage-provisioner
21d12e92bdc34   20b332c9a70d8   5 minutes ago   Running  kubernetes-dashboard        0        cd7daf648a193   kubernetes-dashboard-cd95d586-bpfhq
8d57d7ac631a1   25a5233254979   5 minutes ago   Running  kube-proxy                  1        0b380ce568da1   kube-proxy-xhtkg
025f5094ecf9c   1611cd07b61d5   5 minutes ago   Running  busybox                     1        0f5e23afd2053   busybox
758a5a1373a2d   ba04bb24b9575   5 minutes ago   Exited   storage-provisioner         1        d1fc1eee4cea5   storage-provisioner
1bfdc8d63afe5   ee75e27fff91c   5 minutes ago   Running  kindnet-cni                 1        4863a79d92778   kindnet-d7wd6
7aa43c123ca5c   db91994f4ee8f   5 minutes ago   Running  coredns                     1        81b520af2936c   coredns-74ff55c5b-hbrnk
1d1af565585c6   2c08bbbc02d3a   5 minutes ago   Running  kube-apiserver              1        09215a26df46e   kube-apiserver-old-k8s-version-684625
153a58e15e3c4   1df8a2b116bd1   5 minutes ago   Running  kube-controller-manager     1        b0c21468fff4b   kube-controller-manager-old-k8s-version-684625
4f05943415698   e7605f88f17d6   5 minutes ago   Running  kube-scheduler              1        da7b90963f8c9   kube-scheduler-old-k8s-version-684625
8aa69534f9958   05b738aa1bc63   5 minutes ago   Running  etcd                        1        78a6d684f6c7b   etcd-old-k8s-version-684625
656af4b962d57   1611cd07b61d5   6 minutes ago   Exited   busybox                     0        87ae35fa80299   busybox
d2fbdfba3ef99   db91994f4ee8f   7 minutes ago   Exited   coredns                     0        cdfe1b3f40470   coredns-74ff55c5b-hbrnk
bab8f4d6f0ee4   ee75e27fff91c   8 minutes ago   Exited   kindnet-cni                 0        1d954be6de071   kindnet-d7wd6
b1f911e5c971d   25a5233254979   8 minutes ago   Exited   kube-proxy                  0        17c05ca7096a9   kube-proxy-xhtkg
eb52e41d1f229   1df8a2b116bd1   8 minutes ago   Exited   kube-controller-manager     0        88ed6c99dd4a8   kube-controller-manager-old-k8s-version-684625
b6ca4124b9d04   2c08bbbc02d3a   8 minutes ago   Exited   kube-apiserver              0        a79e4e6db6658   kube-apiserver-old-k8s-version-684625
6fb5b4bd5f9ac   05b738aa1bc63   8 minutes ago   Exited   etcd                        0        ebf7d4d1dc213   etcd-old-k8s-version-684625
50badd161aa11   e7605f88f17d6   8 minutes ago   Exited   kube-scheduler              0        095024c3d08f3   kube-scheduler-old-k8s-version-684625
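Reading the table above: after the restart everything settles into Running except dashboard-metrics-scraper, which is Exited on attempt 5, i.e. crash-looping. A hedged way to pull its events and previous-container logs (pod name copied from the table; minikube normally names the kubeconfig context after the profile, so the --context value is an assumption):

  kubectl --context old-k8s-version-684625 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-6p4sg
  kubectl --context old-k8s-version-684625 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-6p4sg --previous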
==> containerd <==
Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.509044711Z" level=info msg="CreateContainer within sandbox \"03bb8f6997fb2ddf3aa11304e7d38cc6a7de732702d817967ee8226f4f56252c\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.530365532Z" level=info msg="CreateContainer within sandbox \"03bb8f6997fb2ddf3aa11304e7d38cc6a7de732702d817967ee8226f4f56252c\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\""
Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.531018707Z" level=info msg="StartContainer for \"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\""
Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.601311302Z" level=info msg="StartContainer for \"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\" returns successfully"
Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.601474547Z" level=info msg="received exit event container_id:\"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\" id:\"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\" pid:3028 exit_status:255 exited_at:{seconds:1739798431 nanos:601173403}"
Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.629002412Z" level=info msg="shim disconnected" id=3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462 namespace=k8s.io
Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.629064523Z" level=warning msg="cleaning up after shim disconnected" id=3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462 namespace=k8s.io
Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.629074163Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 17 13:20:32 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:32.260860223Z" level=info msg="RemoveContainer for \"fee65f05320f0d9e7201f62b22ef81f5b9f93d140110f1e972572efa4b0ad5d1\""
Feb 17 13:20:32 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:32.272975973Z" level=info msg="RemoveContainer for \"fee65f05320f0d9e7201f62b22ef81f5b9f93d140110f1e972572efa4b0ad5d1\" returns successfully"
Feb 17 13:21:46 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:46.507061731Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 17 13:21:46 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:46.512048717Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Feb 17 13:21:46 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:46.514139498Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Feb 17 13:21:46 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:46.514177503Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.508371695Z" level=info msg="CreateContainer within sandbox \"03bb8f6997fb2ddf3aa11304e7d38cc6a7de732702d817967ee8226f4f56252c\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.528769095Z" level=info msg="CreateContainer within sandbox \"03bb8f6997fb2ddf3aa11304e7d38cc6a7de732702d817967ee8226f4f56252c\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe\""
Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.529924591Z" level=info msg="StartContainer for \"a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe\""
Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.592552264Z" level=info msg="StartContainer for \"a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe\" returns successfully"
Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.595616613Z" level=info msg="received exit event container_id:\"a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe\" id:\"a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe\" pid:3260 exit_status:255 exited_at:{seconds:1739798512 nanos:594860441}"
Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.620687638Z" level=info msg="shim disconnected" id=a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe namespace=k8s.io
Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.620778088Z" level=warning msg="cleaning up after shim disconnected" id=a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe namespace=k8s.io
Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.620791676Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.634178885Z" level=warning msg="cleanup warnings time=\"2025-02-17T13:21:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 17 13:21:53 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:53.490081848Z" level=info msg="RemoveContainer for \"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\""
Feb 17 13:21:53 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:53.496790558Z" level=info msg="RemoveContainer for \"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\" returns successfully"
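The PullImage failures above are expected: fake.domain/registry.k8s.io/echoserver:1.4 is the deliberately unresolvable image this test feeds to metrics-server, and every pull dies at DNS ("lookup fake.domain ... no such host") before any registry is contacted. A sketch to reproduce the same failure by hand from inside the node (crictl ships in minikube's kicbase image, though the exact error text may vary by containerd version):

  out/minikube-linux-arm64 -p old-k8s-version-684625 ssh "sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4"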
==> coredns [7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:50081 - 62598 "HINFO IN 7539283703582014816.3843172251095661444. udp 57 false 512" NOERROR - 0 6.000718692s
[ERROR] plugin/errors: 2 7539283703582014816.3843172251095661444. HINFO: read udp 10.244.0.4:45310->192.168.85.1:53: i/o timeout
[INFO] 127.0.0.1:33984 - 33744 "HINFO IN 7539283703582014816.3843172251095661444. udp 57 false 512" NXDOMAIN qr,rd,ra 57 4.00474647s
[INFO] 127.0.0.1:47217 - 34059 "HINFO IN 7539283703582014816.3843172251095661444. udp 57 false 512" NXDOMAIN qr,rd,ra 57 2.003237407s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] 127.0.0.1:49819 - 61104 "HINFO IN 7539283703582014816.3843172251095661444. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013633894s
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0217 13:19:16.501878 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-17 13:18:46.501319316 +0000 UTC m=+0.093551390) (total time: 30.000460413s):
Trace[939984059]: [30.000460413s] [30.000460413s] END
E0217 13:19:16.501904 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0217 13:19:16.502095 1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-17 13:18:46.501835485 +0000 UTC m=+0.094067560) (total time: 30.000248472s):
Trace[1474941318]: [30.000248472s] [30.000248472s] END
E0217 13:19:16.502101 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0217 13:19:16.502163 1 trace.go:116] Trace[140954425]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-17 13:18:46.495778299 +0000 UTC m=+0.088010374) (total time: 30.006376015s):
Trace[140954425]: [30.006376015s] [30.006376015s] END
E0217 13:19:16.502168 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
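All three reflector errors above are one symptom: for roughly the first 30 seconds after this coredns replica started (13:18:46 to 13:19:16) it could not reach the in-cluster apiserver VIP at 10.96.0.1:443, which is also why the "ready" plugin kept waiting on the kubernetes backend. A hedged sanity check that the Service VIP really fronts the apiserver seen healthy earlier at 192.168.85.2:8443 (--context value assumed from the profile name):

  kubectl --context old-k8s-version-684625 get endpoints kubernetes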
==> coredns [d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:44187 - 16515 "HINFO IN 160704174796526169.2174922180872160481. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.03576722s
==> describe nodes <==
Name: old-k8s-version-684625
Roles: control-plane,master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=old-k8s-version-684625
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=d5460083481c20438a5263486cb626e4191c2126
                    minikube.k8s.io/name=old-k8s-version-684625
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2025_02_17T13_15_56_0700
                    minikube.k8s.io/version=v1.35.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 17 Feb 2025 13:15:52 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-684625
AcquireTime: <unset>
RenewTime: Mon, 17 Feb 2025 13:24:22 +0000
Conditions:
Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------   -----------------                 ------------------                ------                       -------
MemoryPressure   False    Mon, 17 Feb 2025 13:19:32 +0000   Mon, 17 Feb 2025 13:15:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False    Mon, 17 Feb 2025 13:19:32 +0000   Mon, 17 Feb 2025 13:15:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False    Mon, 17 Feb 2025 13:19:32 +0000   Mon, 17 Feb 2025 13:15:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True     Mon, 17 Feb 2025 13:19:32 +0000   Mon, 17 Feb 2025 13:16:12 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.85.2
Hostname: old-k8s-version-684625
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: 631c480c8f0f434f8d5713e5c84e7653
System UUID: 3bb971ce-bb5d-4937-b0c4-fc32579828e1
Boot ID: f9f324bd-030b-4f03-bce8-fdc4ef2922d9
Kernel Version: 5.15.0-1077-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.25
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace              Name                                              CPU Requests   CPU Limits   Memory Requests   Memory Limits   AGE
---------              ----                                              ------------   ----------   ---------------   -------------   ---
default                busybox                                           0 (0%)         0 (0%)       0 (0%)            0 (0%)          6m44s
kube-system            coredns-74ff55c5b-hbrnk                           100m (5%)      0 (0%)       70Mi (0%)         170Mi (2%)      8m14s
kube-system            etcd-old-k8s-version-684625                       100m (5%)      0 (0%)       100Mi (1%)        0 (0%)          8m22s
kube-system            kindnet-d7wd6                                     100m (5%)      100m (5%)    50Mi (0%)         50Mi (0%)       8m14s
kube-system            kube-apiserver-old-k8s-version-684625             250m (12%)     0 (0%)       0 (0%)            0 (0%)          8m22s
kube-system            kube-controller-manager-old-k8s-version-684625    200m (10%)     0 (0%)       0 (0%)            0 (0%)          8m22s
kube-system            kube-proxy-xhtkg                                  0 (0%)         0 (0%)       0 (0%)            0 (0%)          8m14s
kube-system            kube-scheduler-old-k8s-version-684625             100m (5%)      0 (0%)       0 (0%)            0 (0%)          8m22s
kube-system            metrics-server-9975d5f86-bj72q                    100m (5%)      0 (0%)       200Mi (2%)        0 (0%)          6m31s
kube-system            storage-provisioner                               0 (0%)         0 (0%)       0 (0%)            0 (0%)          8m13s
kubernetes-dashboard   dashboard-metrics-scraper-8d5bb5db8-6p4sg         0 (0%)         0 (0%)       0 (0%)            0 (0%)          5m28s
kubernetes-dashboard   kubernetes-dashboard-cd95d586-bpfhq               0 (0%)         0 (0%)       0 (0%)            0 (0%)          5m28s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource            Requests       Limits
--------            --------       ------
cpu                 950m (47%)     100m (5%)
memory              420Mi (5%)     220Mi (2%)
ephemeral-storage   100Mi (0%)     0 (0%)
hugepages-1Gi       0 (0%)         0 (0%)
hugepages-2Mi       0 (0%)         0 (0%)
hugepages-32Mi      0 (0%)         0 (0%)
hugepages-64Ki      0 (0%)         0 (0%)
Events:
Type     Reason                    Age                     From          Message
----     ------                    ----                    ----          -------
Normal   NodeHasSufficientMemory   8m42s (x4 over 8m42s)   kubelet       Node old-k8s-version-684625 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure     8m42s (x5 over 8m42s)   kubelet       Node old-k8s-version-684625 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID      8m42s (x4 over 8m42s)   kubelet       Node old-k8s-version-684625 status is now: NodeHasSufficientPID
Normal   Starting                  8m23s                   kubelet       Starting kubelet.
Normal   NodeHasSufficientMemory   8m23s                   kubelet       Node old-k8s-version-684625 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure     8m23s                   kubelet       Node old-k8s-version-684625 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID      8m23s                   kubelet       Node old-k8s-version-684625 status is now: NodeHasSufficientPID
Normal   NodeAllocatableEnforced   8m22s                   kubelet       Updated Node Allocatable limit across pods
Normal   NodeReady                 8m14s                   kubelet       Node old-k8s-version-684625 status is now: NodeReady
Normal   Starting                  8m13s                   kube-proxy    Starting kube-proxy.
Normal   Starting                  6m1s                    kubelet       Starting kubelet.
Normal   NodeHasSufficientMemory   6m1s (x8 over 6m1s)     kubelet       Node old-k8s-version-684625 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure     6m1s (x8 over 6m1s)     kubelet       Node old-k8s-version-684625 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID      6m1s (x7 over 6m1s)     kubelet       Node old-k8s-version-684625 status is now: NodeHasSufficientPID
Normal   NodeAllocatableEnforced   6m1s                    kubelet       Updated Node Allocatable limit across pods
Normal   Starting                  5m29s                   kube-proxy    Starting kube-proxy.
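The percentages in Allocated resources above are plain division against Allocatable: 950m of 2000m allocatable CPU is the 47% shown (kubectl truncates), and 420Mi requested of 8022296Ki (~7834Mi) is 5%, so the node is nowhere near request pressure and the failure is not a capacity problem. The same view can be re-pulled at any time (--context value assumed from the profile name):

  kubectl --context old-k8s-version-684625 describe node old-k8s-version-684625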
==> dmesg <==
==> etcd [6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3] <==
raft2025/02/17 13:15:45 INFO: 9f0758e1c58a86ed is starting a new election at term 1
raft2025/02/17 13:15:45 INFO: 9f0758e1c58a86ed became candidate at term 2
raft2025/02/17 13:15:45 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
raft2025/02/17 13:15:45 INFO: 9f0758e1c58a86ed became leader at term 2
raft2025/02/17 13:15:45 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
2025-02-17 13:15:45.834562 I | etcdserver: setting up the initial cluster version to 3.4
2025-02-17 13:15:45.834887 I | etcdserver: published {Name:old-k8s-version-684625 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
2025-02-17 13:15:45.835128 I | embed: ready to serve client requests
2025-02-17 13:15:45.836651 I | embed: serving client requests on 192.168.85.2:2379
2025-02-17 13:15:45.842502 I | embed: ready to serve client requests
2025-02-17 13:15:45.846842 I | embed: serving client requests on 127.0.0.1:2379
2025-02-17 13:15:45.847656 N | etcdserver/membership: set the initial cluster version to 3.4
2025-02-17 13:15:45.916322 I | etcdserver/api: enabled capabilities for version 3.4
2025-02-17 13:16:08.500676 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:16:09.604266 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:16:19.604200 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:16:29.604163 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:16:39.604185 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:16:49.604338 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:16:59.604334 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:17:09.604282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:17:19.604193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:17:29.604138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:17:39.604381 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:17:49.604390 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b] <==
2025-02-17 13:20:17.920818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:20:27.920712 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:20:37.920812 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:20:47.920712 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:20:57.920760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:21:07.920675 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:21:17.920738 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:21:27.920684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:21:37.920752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:21:47.920670 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:21:57.920953 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:22:07.920796 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:22:17.920711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:22:27.920657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:22:37.920790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:22:47.920822 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:22:57.920807 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:23:07.920660 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:23:17.920832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:23:27.920824 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:23:37.920741 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:23:47.920935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:23:57.920880 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:24:07.921055 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-17 13:24:17.920894 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
13:24:26 up 3 days, 14:03, 0 users, load average: 0.49, 1.73, 2.39
Linux old-k8s-version-684625 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d] <==
I0217 13:22:24.167710 1 main.go:301] handling current node
I0217 13:22:34.163631 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:22:34.163666 1 main.go:301] handling current node
I0217 13:22:44.167487 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:22:44.167519 1 main.go:301] handling current node
I0217 13:22:54.159595 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:22:54.159647 1 main.go:301] handling current node
I0217 13:23:04.165756 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:23:04.165792 1 main.go:301] handling current node
I0217 13:23:14.165746 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:23:14.165780 1 main.go:301] handling current node
I0217 13:23:24.165751 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:23:24.165788 1 main.go:301] handling current node
I0217 13:23:34.158780 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:23:34.158816 1 main.go:301] handling current node
I0217 13:23:44.166637 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:23:44.166678 1 main.go:301] handling current node
I0217 13:23:54.159243 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:23:54.159282 1 main.go:301] handling current node
I0217 13:24:04.165763 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:24:04.165798 1 main.go:301] handling current node
I0217 13:24:14.167020 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:24:14.167246 1 main.go:301] handling current node
I0217 13:24:24.165735 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:24:24.165768 1 main.go:301] handling current node
==> kindnet [bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767] <==
I0217 13:16:15.733890 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
I0217 13:16:16.050721 1 controller.go:361] Starting controller kube-network-policies
I0217 13:16:16.050751 1 controller.go:365] Waiting for informer caches to sync
I0217 13:16:16.050757 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I0217 13:16:16.251050 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0217 13:16:16.251079 1 metrics.go:61] Registering metrics
I0217 13:16:16.251315 1 controller.go:401] Syncing nftables rules
I0217 13:16:26.050544 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:16:26.050735 1 main.go:301] handling current node
I0217 13:16:36.050547 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:16:36.050584 1 main.go:301] handling current node
I0217 13:16:46.053945 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:16:46.054002 1 main.go:301] handling current node
I0217 13:16:56.057783 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:16:56.057819 1 main.go:301] handling current node
I0217 13:17:06.059014 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:17:06.059136 1 main.go:301] handling current node
I0217 13:17:16.050815 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:17:16.050852 1 main.go:301] handling current node
I0217 13:17:26.053751 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:17:26.053784 1 main.go:301] handling current node
I0217 13:17:36.053783 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:17:36.053829 1 main.go:301] handling current node
I0217 13:17:46.052556 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0217 13:17:46.052595 1 main.go:301] handling current node
==> kube-apiserver [1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d] <==
I0217 13:21:07.214981 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0217 13:21:07.215012 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0217 13:21:42.368003 1 handler_proxy.go:102] no RequestInfo found in the context
E0217 13:21:42.368104 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0217 13:21:42.368121 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0217 13:21:45.048783 1 client.go:360] parsed scheme: "passthrough"
I0217 13:21:45.048863 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0217 13:21:45.048875 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0217 13:22:27.289605 1 client.go:360] parsed scheme: "passthrough"
I0217 13:22:27.289647 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0217 13:22:27.289656 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0217 13:23:01.402539 1 client.go:360] parsed scheme: "passthrough"
I0217 13:23:01.402782 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0217 13:23:01.402880 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0217 13:23:39.025108 1 client.go:360] parsed scheme: "passthrough"
I0217 13:23:39.025158 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0217 13:23:39.025168 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0217 13:23:40.204093 1 handler_proxy.go:102] no RequestInfo found in the context
E0217 13:23:40.204169 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0217 13:23:40.204310 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0217 13:24:15.580647 1 client.go:360] parsed scheme: "passthrough"
I0217 13:24:15.580701 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0217 13:24:15.580722 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9] <==
I0217 13:15:53.153522 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0217 13:15:53.153856 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0217 13:15:53.322201 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0217 13:15:53.334426 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0217 13:15:53.334903 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0217 13:15:53.690522 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0217 13:15:53.742112 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0217 13:15:53.886403 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
I0217 13:15:53.887533 1 controller.go:606] quota admission added evaluator for: endpoints
I0217 13:15:53.891183 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0217 13:15:54.876300 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0217 13:15:55.415858 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0217 13:15:55.477012 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0217 13:16:03.909112 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0217 13:16:12.340576 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0217 13:16:12.506256 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0217 13:16:17.097467 1 client.go:360] parsed scheme: "passthrough"
I0217 13:16:17.097511 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0217 13:16:17.097519 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0217 13:16:51.794759 1 client.go:360] parsed scheme: "passthrough"
I0217 13:16:51.794842 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0217 13:16:51.794890 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0217 13:17:28.253902 1 client.go:360] parsed scheme: "passthrough"
I0217 13:17:28.253945 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0217 13:17:28.253954 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044] <==
W0217 13:20:03.725231 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0217 13:20:29.770210 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0217 13:20:35.375650 1 request.go:655] Throttling request took 1.047751514s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0217 13:20:36.227112 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0217 13:21:00.272132 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0217 13:21:07.877718 1 request.go:655] Throttling request took 1.047359447s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0217 13:21:08.729497 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0217 13:21:30.774476 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0217 13:21:40.380042 1 request.go:655] Throttling request took 1.048372425s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0217 13:21:41.231456 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0217 13:22:01.276901 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0217 13:22:12.882031 1 request.go:655] Throttling request took 1.048449937s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0217 13:22:13.733392 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0217 13:22:31.778662 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0217 13:22:45.383969 1 request.go:655] Throttling request took 1.048065922s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1?timeout=32s
W0217 13:22:46.235405 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0217 13:23:02.280537 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0217 13:23:17.885764 1 request.go:655] Throttling request took 1.047753683s, request: GET:https://192.168.85.2:8443/apis/networking.k8s.io/v1?timeout=32s
W0217 13:23:18.739822 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0217 13:23:32.782562 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0217 13:23:50.390320 1 request.go:655] Throttling request took 1.047969586s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0217 13:23:51.241835 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0217 13:24:03.284334 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0217 13:24:22.892284 1 request.go:655] Throttling request took 1.046970975s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0217 13:24:23.743857 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
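Every Throttling and garbage-collector warning in this block has a single root cause: the aggregated v1beta1.metrics.k8s.io API is unavailable because its backing metrics-server pod can never pull its deliberately fake image (see the containerd section above). A sketch to confirm that APIService is the culprit (--context value assumed from the profile name; the k8s-app label is the one the minikube addon normally applies, adjust if it differs):

  kubectl --context old-k8s-version-684625 get apiservice v1beta1.metrics.k8s.io
  kubectl --context old-k8s-version-684625 -n kube-system get pods -l k8s-app=metrics-server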
==> kube-controller-manager [eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9] <==
I0217 13:16:12.546084 1 shared_informer.go:247] Caches are synced for resource quota
I0217 13:16:12.549472 1 shared_informer.go:247] Caches are synced for attach detach
I0217 13:16:12.555975 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I0217 13:16:12.574515 1 shared_informer.go:247] Caches are synced for PVC protection
I0217 13:16:12.590384 1 shared_informer.go:247] Caches are synced for persistent volume
I0217 13:16:12.590513 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0217 13:16:12.590921 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0217 13:16:12.590950 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0217 13:16:12.590970 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0217 13:16:12.590992 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0217 13:16:12.591042 1 shared_informer.go:247] Caches are synced for resource quota
I0217 13:16:12.634197 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-g6vph"
I0217 13:16:12.717892 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-hbrnk"
I0217 13:16:12.721806 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E0217 13:16:12.776250 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"dacb3042-be21-40a8-bf08-b00f12f5856b", ResourceVersion:"281", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63875394956, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20250214-acbabc1a\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001b92c60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001b92c80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b92ca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b92cc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b92ce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b92d00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20250214-acbabc1a", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b92d20)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b92d60)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001b7f3e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d1b608), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a36af0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000347e30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d1b660)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I0217 13:16:12.971326 1 shared_informer.go:247] Caches are synced for garbage collector
I0217 13:16:12.971350 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0217 13:16:13.021927 1 shared_informer.go:247] Caches are synced for garbage collector
I0217 13:16:14.035377 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0217 13:16:14.078471 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-g6vph"
I0217 13:16:17.328336 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0217 13:17:54.144915 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I0217 13:17:54.198623 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
E0217 13:17:54.244456 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
E0217 13:17:54.352740 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
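The two "Operation cannot be fulfilled ... the object has been modified" errors above (on the kindnet DaemonSet and the admin ClusterRole) are ordinary optimistic-concurrency conflicts: two writers raced on the same resourceVersion and the API server rejected the stale update. The standard fix is exactly what the message suggests, re-read and retry. A minimal client-go sketch of that pattern, assuming a kubernetes.Interface client; the function name and the annotation it writes are illustrative only, not part of this test:

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// touchKindnet re-fetches the DaemonSet and reapplies its change each time
// the update comes back as a 409 Conflict.
func touchKindnet(ctx context.Context, client kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := client.AppsV1().DaemonSets("kube-system").Get(ctx, "kindnet", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Annotations == nil {
			ds.Annotations = map[string]string{}
		}
		ds.Annotations["example.com/refreshed"] = "true" // hypothetical mutation
		_, err = client.AppsV1().DaemonSets("kube-system").Update(ctx, ds, metav1.UpdateOptions{})
		return err // a Conflict error here triggers another Get+Update round
	})
}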
==> kube-proxy [8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e] <==
I0217 13:18:57.745280 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0217 13:18:57.745611 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0217 13:18:57.764994 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0217 13:18:57.765284 1 server_others.go:185] Using iptables Proxier.
I0217 13:18:57.765864 1 server.go:650] Version: v1.20.0
I0217 13:18:57.766565 1 config.go:224] Starting endpoint slice config controller
I0217 13:18:57.766685 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0217 13:18:57.766986 1 config.go:315] Starting service config controller
I0217 13:18:57.767085 1 shared_informer.go:240] Waiting for caches to sync for service config
I0217 13:18:57.866922 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0217 13:18:57.867235 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5] <==
I0217 13:16:13.416476 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0217 13:16:13.416571 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0217 13:16:13.531298 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0217 13:16:13.531446 1 server_others.go:185] Using iptables Proxier.
I0217 13:16:13.531896 1 server.go:650] Version: v1.20.0
I0217 13:16:13.532536 1 config.go:315] Starting service config controller
I0217 13:16:13.532544 1 shared_informer.go:240] Waiting for caches to sync for service config
I0217 13:16:13.532561 1 config.go:224] Starting endpoint slice config controller
I0217 13:16:13.532564 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0217 13:16:13.632643 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0217 13:16:13.632716 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c] <==
I0217 13:18:33.501477 1 serving.go:331] Generated self-signed cert in-memory
I0217 13:18:40.893895 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0217 13:18:40.894800 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0217 13:18:40.895440 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0217 13:18:40.895543 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0217 13:18:40.896019 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0217 13:18:40.896116 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0217 13:18:40.894279 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0217 13:18:40.894303 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0217 13:18:40.995112 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0217 13:18:40.996199 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0217 13:18:40.996359 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
==> kube-scheduler [50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c] <==
I0217 13:15:48.089376 1 serving.go:331] Generated self-signed cert in-memory
W0217 13:15:52.390378 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0217 13:15:52.390796 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0217 13:15:52.390950 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0217 13:15:52.391079 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0217 13:15:52.449343 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0217 13:15:52.449654 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0217 13:15:52.451289 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0217 13:15:52.465379 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0217 13:15:52.465489 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0217 13:15:52.465559 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0217 13:15:52.465624 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0217 13:15:52.473566 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0217 13:15:52.473615 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I0217 13:15:52.474789 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0217 13:15:52.479377 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0217 13:15:52.488374 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0217 13:15:52.488671 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0217 13:15:52.489171 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0217 13:15:52.489451 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0217 13:15:52.489566 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0217 13:15:53.474304 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0217 13:15:53.479782 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0217 13:15:54.051459 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
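The burst of "forbidden" list/watch errors above is the scheduler starting before the API server has finished bootstrapping RBAC; once the system:kube-scheduler bindings exist the reflectors recover, and the cache sync at 13:15:54 confirms it. The same permissions can be probed explicitly with a SubjectAccessReview; a sketch assuming a sufficiently privileged kubernetes.Interface client (canUserList is an illustrative name):

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canUserList asks the API server whether a user may list a cluster-scoped
// resource, mirroring the checks that fail in the log above.
func canUserList(ctx context.Context, client kubernetes.Interface, user, resource string) (bool, error) {
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: user, // e.g. "system:kube-scheduler"
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: resource, // e.g. "pods"
			},
		},
	}
	resp, err := client.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
	return resp.Status.Allowed, nil
}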
==> kubelet <==
Feb 17 13:22:40 old-k8s-version-684625 kubelet[661]: E0217 13:22:40.506352 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
Feb 17 13:22:48 old-k8s-version-684625 kubelet[661]: E0217 13:22:48.506643 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 17 13:22:52 old-k8s-version-684625 kubelet[661]: I0217 13:22:52.505923 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
Feb 17 13:22:52 old-k8s-version-684625 kubelet[661]: E0217 13:22:52.506276 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
Feb 17 13:23:02 old-k8s-version-684625 kubelet[661]: E0217 13:23:02.506740 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 17 13:23:03 old-k8s-version-684625 kubelet[661]: I0217 13:23:03.505874 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
Feb 17 13:23:03 old-k8s-version-684625 kubelet[661]: E0217 13:23:03.506425 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: I0217 13:23:15.510665 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511407 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511756 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 17 13:23:27 old-k8s-version-684625 kubelet[661]: E0217 13:23:27.506707 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: I0217 13:23:30.505830 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: E0217 13:23:30.506203 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
Feb 17 13:23:39 old-k8s-version-684625 kubelet[661]: E0217 13:23:39.506666 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: I0217 13:23:44.505888 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: I0217 13:23:56.505740 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.506785 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: I0217 13:24:07.507285 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.507625 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
Feb 17 13:24:20 old-k8s-version-684625 kubelet[661]: I0217 13:24:20.505921 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
Feb 17 13:24:20 old-k8s-version-684625 kubelet[661]: E0217 13:24:20.507003 661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
Feb 17 13:24:21 old-k8s-version-684625 kubelet[661]: E0217 13:24:21.506691 661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
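Two back-off loops repeat above: metrics-server sits in ImagePullBackOff because this test deliberately points it at the unreachable fake.domain registry, and dashboard-metrics-scraper sits in CrashLoopBackOff, so the kubelet re-queues both with growing delays. The post-mortem below recovers the same reasons from pod status; a small client-go sketch of that inspection (client is assumed):

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printWaitingContainers lists containers stuck in a waiting state, whose
// Reason field carries ImagePullBackOff / CrashLoopBackOff as seen above.
func printWaitingContainers(ctx context.Context, client kubernetes.Interface, namespace string) error {
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil {
				fmt.Printf("%s/%s %s: %s %s\n", pod.Namespace, pod.Name, cs.Name, w.Reason, w.Message)
			}
		}
	}
	return nil
}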
==> kubernetes-dashboard [21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42] <==
2025/02/17 13:19:07 Starting overwatch
2025/02/17 13:19:07 Using namespace: kubernetes-dashboard
2025/02/17 13:19:07 Using in-cluster config to connect to apiserver
2025/02/17 13:19:07 Using secret token for csrf signing
2025/02/17 13:19:07 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/02/17 13:19:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/02/17 13:19:07 Successful initial request to the apiserver, version: v1.20.0
2025/02/17 13:19:07 Generating JWE encryption key
2025/02/17 13:19:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/02/17 13:19:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/02/17 13:19:08 Initializing JWE encryption key from synchronized object
2025/02/17 13:19:08 Creating in-cluster Sidecar client
2025/02/17 13:19:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/17 13:19:08 Serving insecurely on HTTP port: 9090
2025/02/17 13:19:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/17 13:20:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/17 13:20:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/17 13:21:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/17 13:21:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/17 13:22:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/17 13:22:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/17 13:23:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/17 13:23:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/17 13:24:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
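The dashboard never reaches its metrics scraper here: it probes once at startup and then re-probes every 30 seconds for the rest of the run. The shape of that loop as a plain-Go sketch (retryHealthCheck and probe are illustrative names, not dashboard code):

import (
	"context"
	"fmt"
	"time"
)

// retryHealthCheck probes once, then on failure logs and retries every
// 30 seconds until the probe succeeds or the context is cancelled.
func retryHealthCheck(ctx context.Context, probe func(context.Context) error) error {
	for {
		err := probe(ctx)
		if err == nil {
			return nil
		}
		fmt.Printf("Metric client health check failed: %v. Retrying in 30 seconds.\n", err)
		select {
		case <-time.After(30 * time.Second):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}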
==> storage-provisioner [758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f] <==
I0217 13:18:54.603472 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0217 13:19:24.605517 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6] <==
I0217 13:20:05.612637 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0217 13:20:05.627722 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0217 13:20:05.627903 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0217 13:20:23.154282 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0217 13:20:23.154727 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-684625_55565854-9ca0-4b19-8f32-bf3332fe1135!
I0217 13:20:23.157726 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c35428c6-272d-42a7-b9d2-a4f0095100b5", APIVersion:"v1", ResourceVersion:"911", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-684625_55565854-9ca0-4b19-8f32-bf3332fe1135 became leader
I0217 13:20:23.257071 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-684625_55565854-9ca0-4b19-8f32-bf3332fe1135!
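The second provisioner instance above blocks from 13:20:05 to 13:20:23 waiting to win the kube-system/k8s.io-minikube-hostpath lock before starting its controller; that is client-go leader election over a (legacy) Endpoints lock, per the event it emits. A sketch of the same flow with the current Lease-based lock, reusing the names from the log (client, id, and start are assumed):

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWhenLeader blocks until this instance holds the lease, mirroring the
// "attempting to acquire" / "successfully acquired" pair in the log above.
func runWhenLeader(ctx context.Context, client kubernetes.Interface, id string, start func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: start,
			OnStoppedLeading: func() { /* lease lost: stop provisioning */ },
		},
	})
}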
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-684625 -n old-k8s-version-684625
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-684625 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-bj72q
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-684625 describe pod metrics-server-9975d5f86-bj72q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-684625 describe pod metrics-server-9975d5f86-bj72q: exit status 1 (108.346314ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-bj72q" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-684625 describe pod metrics-server-9975d5f86-bj72q: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.62s)
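The post-mortem above isolates non-running pods with kubectl's --field-selector=status.phase!=Running; status.phase is one of the field selectors the API server supports natively for pods, so the same query works programmatically. A minimal client-go equivalent (client is assumed):

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listNonRunningPods mirrors the harness's kubectl query across all namespaces.
func listNonRunningPods(ctx context.Context, client kubernetes.Interface) error {
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
	return nil
}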