=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-018253 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-018253 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m9.175262578s)
-- stdout --
* [old-k8s-version-018253] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20506
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20506-2281/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-2281/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-018253" primary control-plane node in "old-k8s-version-018253" cluster
* Pulling base image v0.0.46-1741860993-20523 ...
* Restarting existing docker container for "old-k8s-version-018253" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-018253 addons enable metrics-server
* Enabled addons: dashboard, metrics-server, storage-provisioner, default-storageclass
-- /stdout --
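The stderr trace below shows where the time went: minikube commits to a 6m0s node wait (see the "Will wait 6m0s for node" line further down), and a 6m9s wall time for an exit status 102 is consistent with that wait expiring rather than the start itself crashing. A minimal triage sequence against the leftover profile, assuming it has not been deleted (binary path and profile name are taken from the log above):

    out/minikube-linux-arm64 status -p old-k8s-version-018253          # coarse per-component state
    out/minikube-linux-arm64 logs -p old-k8s-version-018253 --problems # only log lines matching known problems
    docker exec old-k8s-version-018253 crictl ps -a                    # container states inside the node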
** stderr **
I0401 20:32:08.904109 219087 out.go:345] Setting OutFile to fd 1 ...
I0401 20:32:08.904292 219087 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 20:32:08.904305 219087 out.go:358] Setting ErrFile to fd 2...
I0401 20:32:08.904311 219087 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 20:32:08.904569 219087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-2281/.minikube/bin
I0401 20:32:08.905005 219087 out.go:352] Setting JSON to false
I0401 20:32:08.905961 219087 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4474,"bootTime":1743535055,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0401 20:32:08.906024 219087 start.go:139] virtualization:
I0401 20:32:08.911130 219087 out.go:177] * [old-k8s-version-018253] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0401 20:32:08.914422 219087 out.go:177] - MINIKUBE_LOCATION=20506
I0401 20:32:08.914511 219087 notify.go:220] Checking for updates...
I0401 20:32:08.920201 219087 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0401 20:32:08.923361 219087 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20506-2281/kubeconfig
I0401 20:32:08.926323 219087 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-2281/.minikube
I0401 20:32:08.929145 219087 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0401 20:32:08.931973 219087 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0401 20:32:08.935761 219087 config.go:182] Loaded profile config "old-k8s-version-018253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0401 20:32:08.939219 219087 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0401 20:32:08.942198 219087 driver.go:394] Setting default libvirt URI to qemu:///system
I0401 20:32:08.972461 219087 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0401 20:32:08.972579 219087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0401 20:32:09.035302 219087 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-01 20:32:09.025121705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0401 20:32:09.035426 219087 docker.go:318] overlay module found
I0401 20:32:09.038687 219087 out.go:177] * Using the docker driver based on existing profile
I0401 20:32:09.041609 219087 start.go:297] selected driver: docker
I0401 20:32:09.041629 219087 start.go:901] validating driver "docker" against &{Name:old-k8s-version-018253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-018253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0401 20:32:09.041726 219087 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0401 20:32:09.042458 219087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0401 20:32:09.110344 219087 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-01 20:32:09.100549476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0401 20:32:09.110795 219087 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0401 20:32:09.110831 219087 cni.go:84] Creating CNI manager for ""
I0401 20:32:09.110903 219087 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0401 20:32:09.110960 219087 start.go:340] cluster config:
{Name:old-k8s-version-018253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-018253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0401 20:32:09.114116 219087 out.go:177] * Starting "old-k8s-version-018253" primary control-plane node in "old-k8s-version-018253" cluster
I0401 20:32:09.117026 219087 cache.go:121] Beginning downloading kic base image for docker with containerd
I0401 20:32:09.119891 219087 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
I0401 20:32:09.122620 219087 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0401 20:32:09.122676 219087 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-2281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0401 20:32:09.122688 219087 cache.go:56] Caching tarball of preloaded images
I0401 20:32:09.122723 219087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
I0401 20:32:09.122781 219087 preload.go:172] Found /home/jenkins/minikube-integration/20506-2281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0401 20:32:09.122792 219087 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0401 20:32:09.122917 219087 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/old-k8s-version-018253/config.json ...
I0401 20:32:09.143757 219087 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
I0401 20:32:09.143782 219087 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
I0401 20:32:09.143797 219087 cache.go:230] Successfully downloaded all kic artifacts
I0401 20:32:09.143820 219087 start.go:360] acquireMachinesLock for old-k8s-version-018253: {Name:mk9fdf1eddfffa98b7536f6e64d4a6529b09e6b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 20:32:09.143893 219087 start.go:364] duration metric: took 45.916µs to acquireMachinesLock for "old-k8s-version-018253"
I0401 20:32:09.143917 219087 start.go:96] Skipping create...Using existing machine configuration
I0401 20:32:09.143928 219087 fix.go:54] fixHost starting:
I0401 20:32:09.144193 219087 cli_runner.go:164] Run: docker container inspect old-k8s-version-018253 --format={{.State.Status}}
I0401 20:32:09.162506 219087 fix.go:112] recreateIfNeeded on old-k8s-version-018253: state=Stopped err=<nil>
W0401 20:32:09.162538 219087 fix.go:138] unexpected machine state, will restart: <nil>
I0401 20:32:09.165837 219087 out.go:177] * Restarting existing docker container for "old-k8s-version-018253" ...
I0401 20:32:09.168658 219087 cli_runner.go:164] Run: docker start old-k8s-version-018253
I0401 20:32:09.467181 219087 cli_runner.go:164] Run: docker container inspect old-k8s-version-018253 --format={{.State.Status}}
I0401 20:32:09.489679 219087 kic.go:430] container "old-k8s-version-018253" state is running.
I0401 20:32:09.490390 219087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-018253
I0401 20:32:09.519717 219087 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/old-k8s-version-018253/config.json ...
I0401 20:32:09.519944 219087 machine.go:93] provisionDockerMachine start ...
I0401 20:32:09.520013 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:09.551853 219087 main.go:141] libmachine: Using SSH client type: native
I0401 20:32:09.552174 219087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I0401 20:32:09.552184 219087 main.go:141] libmachine: About to run SSH command:
hostname
I0401 20:32:09.553297 219087 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0401 20:32:12.685174 219087 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-018253
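The docker container inspect template that recurs throughout this trace is how minikube resolves the host port Docker mapped onto the node's SSH port 22 (33063 here). A hand-run equivalent, assuming the container is still up:

    docker container inspect old-k8s-version-018253 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # prints the mapped port, e.g. 33063, matching the SSH client lines in this log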
I0401 20:32:12.685210 219087 ubuntu.go:169] provisioning hostname "old-k8s-version-018253"
I0401 20:32:12.685272 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:12.703459 219087 main.go:141] libmachine: Using SSH client type: native
I0401 20:32:12.703808 219087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I0401 20:32:12.703828 219087 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-018253 && echo "old-k8s-version-018253" | sudo tee /etc/hostname
I0401 20:32:12.842120 219087 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-018253
I0401 20:32:12.842222 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:12.860731 219087 main.go:141] libmachine: Using SSH client type: native
I0401 20:32:12.861106 219087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I0401 20:32:12.861131 219087 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-018253' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-018253/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-018253' | sudo tee -a /etc/hosts;
fi
fi
I0401 20:32:12.985389 219087 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0401 20:32:12.985417 219087 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-2281/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-2281/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-2281/.minikube}
I0401 20:32:12.985449 219087 ubuntu.go:177] setting up certificates
I0401 20:32:12.985465 219087 provision.go:84] configureAuth start
I0401 20:32:12.985525 219087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-018253
I0401 20:32:13.006016 219087 provision.go:143] copyHostCerts
I0401 20:32:13.006089 219087 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-2281/.minikube/ca.pem, removing ...
I0401 20:32:13.006110 219087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-2281/.minikube/ca.pem
I0401 20:32:13.006208 219087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-2281/.minikube/ca.pem (1078 bytes)
I0401 20:32:13.006322 219087 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-2281/.minikube/cert.pem, removing ...
I0401 20:32:13.006333 219087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-2281/.minikube/cert.pem
I0401 20:32:13.006364 219087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-2281/.minikube/cert.pem (1123 bytes)
I0401 20:32:13.006437 219087 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-2281/.minikube/key.pem, removing ...
I0401 20:32:13.006445 219087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-2281/.minikube/key.pem
I0401 20:32:13.006471 219087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-2281/.minikube/key.pem (1675 bytes)
I0401 20:32:13.006536 219087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-2281/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-018253 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-018253]
I0401 20:32:13.611765 219087 provision.go:177] copyRemoteCerts
I0401 20:32:13.611831 219087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0401 20:32:13.611915 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:13.633167 219087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/old-k8s-version-018253/id_rsa Username:docker}
I0401 20:32:13.725973 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0401 20:32:13.750969 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0401 20:32:13.781898 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0401 20:32:13.807288 219087 provision.go:87] duration metric: took 821.8103ms to configureAuth
I0401 20:32:13.807316 219087 ubuntu.go:193] setting minikube options for container-runtime
I0401 20:32:13.807505 219087 config.go:182] Loaded profile config "old-k8s-version-018253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0401 20:32:13.807518 219087 machine.go:96] duration metric: took 4.287558691s to provisionDockerMachine
I0401 20:32:13.807526 219087 start.go:293] postStartSetup for "old-k8s-version-018253" (driver="docker")
I0401 20:32:13.807536 219087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0401 20:32:13.807592 219087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0401 20:32:13.807645 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:13.824870 219087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/old-k8s-version-018253/id_rsa Username:docker}
I0401 20:32:13.914519 219087 ssh_runner.go:195] Run: cat /etc/os-release
I0401 20:32:13.917859 219087 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0401 20:32:13.917901 219087 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0401 20:32:13.917919 219087 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0401 20:32:13.917929 219087 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0401 20:32:13.917943 219087 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-2281/.minikube/addons for local assets ...
I0401 20:32:13.918000 219087 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-2281/.minikube/files for local assets ...
I0401 20:32:13.918091 219087 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-2281/.minikube/files/etc/ssl/certs/75972.pem -> 75972.pem in /etc/ssl/certs
I0401 20:32:13.918206 219087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0401 20:32:13.926854 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/files/etc/ssl/certs/75972.pem --> /etc/ssl/certs/75972.pem (1708 bytes)
I0401 20:32:13.951218 219087 start.go:296] duration metric: took 143.676793ms for postStartSetup
I0401 20:32:13.951312 219087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0401 20:32:13.951358 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:13.973313 219087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/old-k8s-version-018253/id_rsa Username:docker}
I0401 20:32:14.076072 219087 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0401 20:32:14.086903 219087 fix.go:56] duration metric: took 4.94296273s for fixHost
I0401 20:32:14.086931 219087 start.go:83] releasing machines lock for "old-k8s-version-018253", held for 4.943024376s
I0401 20:32:14.087026 219087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-018253
I0401 20:32:14.118910 219087 ssh_runner.go:195] Run: cat /version.json
I0401 20:32:14.118968 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:14.119281 219087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0401 20:32:14.119377 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:14.147208 219087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/old-k8s-version-018253/id_rsa Username:docker}
I0401 20:32:14.163742 219087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/old-k8s-version-018253/id_rsa Username:docker}
I0401 20:32:14.264885 219087 ssh_runner.go:195] Run: systemctl --version
I0401 20:32:14.455086 219087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0401 20:32:14.459542 219087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0401 20:32:14.478215 219087 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0401 20:32:14.478290 219087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0401 20:32:14.487473 219087 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0401 20:32:14.487501 219087 start.go:495] detecting cgroup driver to use...
I0401 20:32:14.487533 219087 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0401 20:32:14.487582 219087 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0401 20:32:14.503182 219087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0401 20:32:14.515899 219087 docker.go:217] disabling cri-docker service (if available) ...
I0401 20:32:14.515970 219087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0401 20:32:14.529299 219087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0401 20:32:14.541953 219087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0401 20:32:14.640053 219087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0401 20:32:14.745634 219087 docker.go:233] disabling docker service ...
I0401 20:32:14.745702 219087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0401 20:32:14.760724 219087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0401 20:32:14.774332 219087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0401 20:32:14.891274 219087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0401 20:32:14.979856 219087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0401 20:32:14.993611 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0401 20:32:15.026835 219087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0401 20:32:15.039022 219087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0401 20:32:15.051705 219087 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0401 20:32:15.051778 219087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0401 20:32:15.064399 219087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0401 20:32:15.077602 219087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0401 20:32:15.088724 219087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0401 20:32:15.100024 219087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0401 20:32:15.110949 219087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0401 20:32:15.122150 219087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0401 20:32:15.136510 219087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0401 20:32:15.145894 219087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0401 20:32:15.240113 219087 ssh_runner.go:195] Run: sudo systemctl restart containerd
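The sed sequence above patches /etc/containerd/config.toml in place before the restart. A way to confirm the rewritten keys took effect (a sketch; the grep pattern assumes the stock kicbase config.toml layout):

    docker exec old-k8s-version-018253 \
      grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|runc\.v' /etc/containerd/config.toml
    # expected after the edits:
    #   sandbox_image = "registry.k8s.io/pause:3.2"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    # plus any runtime_type lines rewritten to "io.containerd.runc.v2"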
I0401 20:32:15.449672 219087 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0401 20:32:15.449750 219087 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0401 20:32:15.453968 219087 start.go:563] Will wait 60s for crictl version
I0401 20:32:15.454051 219087 ssh_runner.go:195] Run: which crictl
I0401 20:32:15.457774 219087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0401 20:32:15.499116 219087 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.25
RuntimeApiVersion: v1
I0401 20:32:15.499200 219087 ssh_runner.go:195] Run: containerd --version
I0401 20:32:15.526995 219087 ssh_runner.go:195] Run: containerd --version
I0401 20:32:15.559944 219087 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
I0401 20:32:15.562871 219087 cli_runner.go:164] Run: docker network inspect old-k8s-version-018253 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0401 20:32:15.581737 219087 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0401 20:32:15.585756 219087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
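The bash -c block above is minikube's atomic /etc/hosts edit: drop any stale host.minikube.internal line, append the fresh mapping, then cp the temp file back over /etc/hosts. A spot check from the host, assuming the node container name from this log:

    docker exec old-k8s-version-018253 grep minikube.internal /etc/hosts
    # expect: 192.168.76.1 host.minikube.internal
    # (plus 192.168.76.2 control-plane.minikube.internal once the later hosts patch has run)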
I0401 20:32:15.597920 219087 kubeadm.go:883] updating cluster {Name:old-k8s-version-018253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-018253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0401 20:32:15.598050 219087 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0401 20:32:15.598106 219087 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 20:32:15.641723 219087 containerd.go:627] all images are preloaded for containerd runtime.
I0401 20:32:15.641749 219087 containerd.go:534] Images already preloaded, skipping extraction
I0401 20:32:15.641818 219087 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 20:32:15.680107 219087 containerd.go:627] all images are preloaded for containerd runtime.
I0401 20:32:15.680132 219087 cache_images.go:84] Images are preloaded, skipping loading
I0401 20:32:15.680141 219087 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0401 20:32:15.680247 219087 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-018253 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-018253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0401 20:32:15.680317 219087 ssh_runner.go:195] Run: sudo crictl info
I0401 20:32:15.727583 219087 cni.go:84] Creating CNI manager for ""
I0401 20:32:15.727606 219087 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0401 20:32:15.727617 219087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0401 20:32:15.727646 219087 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-018253 NodeName:old-k8s-version-018253 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0401 20:32:15.727807 219087 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-018253"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
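The rendered manifest above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2125 bytes) and later diffed against the previous run's copy to decide whether the control plane needs reconfiguring. The same comparison can be repeated by hand over the SSH port from this log, assuming the container is still up and the port mapping unchanged:

    ssh -p 33063 \
      -i /home/jenkins/minikube-integration/20506-2281/.minikube/machines/old-k8s-version-018253/id_rsa \
      docker@127.0.0.1 'sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new'
    # an empty diff is what lets the restart path log "does not require reconfiguration" further down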
I0401 20:32:15.727892 219087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0401 20:32:15.737460 219087 binaries.go:44] Found k8s binaries, skipping transfer
I0401 20:32:15.737547 219087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0401 20:32:15.746252 219087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0401 20:32:15.764882 219087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0401 20:32:15.785282 219087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0401 20:32:15.804588 219087 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0401 20:32:15.808223 219087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0401 20:32:15.819479 219087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0401 20:32:15.904633 219087 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0401 20:32:15.919146 219087 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/old-k8s-version-018253 for IP: 192.168.76.2
I0401 20:32:15.919212 219087 certs.go:194] generating shared ca certs ...
I0401 20:32:15.919242 219087 certs.go:226] acquiring lock for ca certs: {Name:mk9fe0d3c9420af86b4bae52abd5f6d6b2c4675e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:32:15.919439 219087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-2281/.minikube/ca.key
I0401 20:32:15.919522 219087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-2281/.minikube/proxy-client-ca.key
I0401 20:32:15.919555 219087 certs.go:256] generating profile certs ...
I0401 20:32:15.919675 219087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/old-k8s-version-018253/client.key
I0401 20:32:15.919766 219087 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/old-k8s-version-018253/apiserver.key.f4105e7a
I0401 20:32:15.919838 219087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/old-k8s-version-018253/proxy-client.key
I0401 20:32:15.919975 219087 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/7597.pem (1338 bytes)
W0401 20:32:15.920044 219087 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-2281/.minikube/certs/7597_empty.pem, impossibly tiny 0 bytes
I0401 20:32:15.920078 219087 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca-key.pem (1675 bytes)
I0401 20:32:15.920128 219087 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem (1078 bytes)
I0401 20:32:15.920191 219087 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/cert.pem (1123 bytes)
I0401 20:32:15.920240 219087 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/key.pem (1675 bytes)
I0401 20:32:15.920308 219087 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/files/etc/ssl/certs/75972.pem (1708 bytes)
I0401 20:32:15.921055 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0401 20:32:15.954265 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0401 20:32:15.983108 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0401 20:32:16.013990 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0401 20:32:16.042566 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/old-k8s-version-018253/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0401 20:32:16.072194 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/old-k8s-version-018253/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0401 20:32:16.100072 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/old-k8s-version-018253/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0401 20:32:16.127857 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/old-k8s-version-018253/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0401 20:32:16.155038 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/files/etc/ssl/certs/75972.pem --> /usr/share/ca-certificates/75972.pem (1708 bytes)
I0401 20:32:16.180483 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0401 20:32:16.205318 219087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/certs/7597.pem --> /usr/share/ca-certificates/7597.pem (1338 bytes)
I0401 20:32:16.231090 219087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0401 20:32:16.250682 219087 ssh_runner.go:195] Run: openssl version
I0401 20:32:16.259204 219087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75972.pem && ln -fs /usr/share/ca-certificates/75972.pem /etc/ssl/certs/75972.pem"
I0401 20:32:16.269940 219087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75972.pem
I0401 20:32:16.273580 219087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 1 19:52 /usr/share/ca-certificates/75972.pem
I0401 20:32:16.273640 219087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75972.pem
I0401 20:32:16.281538 219087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75972.pem /etc/ssl/certs/3ec20f2e.0"
I0401 20:32:16.291382 219087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0401 20:32:16.301103 219087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0401 20:32:16.304721 219087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 1 19:45 /usr/share/ca-certificates/minikubeCA.pem
I0401 20:32:16.304798 219087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0401 20:32:16.312252 219087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0401 20:32:16.326892 219087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7597.pem && ln -fs /usr/share/ca-certificates/7597.pem /etc/ssl/certs/7597.pem"
I0401 20:32:16.338193 219087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7597.pem
I0401 20:32:16.341759 219087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 1 19:52 /usr/share/ca-certificates/7597.pem
I0401 20:32:16.341859 219087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7597.pem
I0401 20:32:16.349087 219087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7597.pem /etc/ssl/certs/51391683.0"
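The three ln -fs targets above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names, which is why each certificate is first run through openssl x509 -hash. Recomputing the links by hand inside the node, assuming the container name from this log:

    docker exec old-k8s-version-018253 sh -c \
      'for f in /usr/share/ca-certificates/*.pem; do
         echo "$f -> /etc/ssl/certs/$(openssl x509 -hash -noout -in "$f").0"
       done'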
I0401 20:32:16.360008 219087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0401 20:32:16.363848 219087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0401 20:32:16.371180 219087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0401 20:32:16.378347 219087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0401 20:32:16.385613 219087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0401 20:32:16.393470 219087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0401 20:32:16.400790 219087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0401 20:32:16.408396 219087 kubeadm.go:392] StartCluster: {Name:old-k8s-version-018253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-018253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0401 20:32:16.408517 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0401 20:32:16.408584 219087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0401 20:32:16.450470 219087 cri.go:89] found id: "bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73"
I0401 20:32:16.450495 219087 cri.go:89] found id: "e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a"
I0401 20:32:16.450501 219087 cri.go:89] found id: "abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829"
I0401 20:32:16.450504 219087 cri.go:89] found id: "35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428"
I0401 20:32:16.450507 219087 cri.go:89] found id: "4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff"
I0401 20:32:16.450512 219087 cri.go:89] found id: "5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee"
I0401 20:32:16.450515 219087 cri.go:89] found id: "1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba"
I0401 20:32:16.450518 219087 cri.go:89] found id: "6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94"
I0401 20:32:16.450521 219087 cri.go:89] found id: ""
I0401 20:32:16.450573 219087 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0401 20:32:16.466634 219087 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-04-01T20:32:16Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0401 20:32:16.466719 219087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0401 20:32:16.475956 219087 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0401 20:32:16.475985 219087 kubeadm.go:593] restartPrimaryControlPlane start ...
I0401 20:32:16.476058 219087 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0401 20:32:16.485558 219087 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0401 20:32:16.486147 219087 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-018253" does not appear in /home/jenkins/minikube-integration/20506-2281/kubeconfig
I0401 20:32:16.486403 219087 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-2281/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-018253" cluster setting kubeconfig missing "old-k8s-version-018253" context setting]
I0401 20:32:16.487189 219087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-2281/kubeconfig: {Name:mkf36432e76eb80fc7384359f87ed1051bb3861b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
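[editor's note] The two kubeconfig lines above show the repair path: the profile's cluster and context entries are missing from the kubeconfig, so the file is rewritten, serialized against other writers by the WriteFile lock just acquired. A minimal sketch of that kind of repair using client-go's clientcmd package; the function name and the server argument are illustrative, not values taken from minikube:

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds missing cluster/context entries for name,
    // leaving any existing entries untouched.
    func repairKubeconfig(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            cfg.Clusters[name] = &api.Cluster{Server: server}
        }
        if _, ok := cfg.Contexts[name]; !ok {
            cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        }
        return clientcmd.WriteToFile(*cfg, path)
    }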
I0401 20:32:16.489215 219087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0401 20:32:16.502904 219087 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0401 20:32:16.502945 219087 kubeadm.go:597] duration metric: took 26.951089ms to restartPrimaryControlPlane
I0401 20:32:16.502955 219087 kubeadm.go:394] duration metric: took 94.569184ms to StartCluster
I0401 20:32:16.502972 219087 settings.go:142] acquiring lock: {Name:mke009045444eed25507a29a5243ce88f8891cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:32:16.503045 219087 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20506-2281/kubeconfig
I0401 20:32:16.504080 219087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-2281/kubeconfig: {Name:mkf36432e76eb80fc7384359f87ed1051bb3861b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:32:16.504306 219087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0401 20:32:16.504630 219087 config.go:182] Loaded profile config "old-k8s-version-018253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0401 20:32:16.504680 219087 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0401 20:32:16.504747 219087 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-018253"
I0401 20:32:16.504772 219087 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-018253"
W0401 20:32:16.504779 219087 addons.go:247] addon storage-provisioner should already be in state true
I0401 20:32:16.504802 219087 host.go:66] Checking if "old-k8s-version-018253" exists ...
I0401 20:32:16.505697 219087 cli_runner.go:164] Run: docker container inspect old-k8s-version-018253 --format={{.State.Status}}
I0401 20:32:16.505854 219087 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-018253"
I0401 20:32:16.505882 219087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-018253"
I0401 20:32:16.506128 219087 cli_runner.go:164] Run: docker container inspect old-k8s-version-018253 --format={{.State.Status}}
I0401 20:32:16.506601 219087 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-018253"
I0401 20:32:16.506623 219087 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-018253"
W0401 20:32:16.506630 219087 addons.go:247] addon metrics-server should already be in state true
I0401 20:32:16.506665 219087 host.go:66] Checking if "old-k8s-version-018253" exists ...
I0401 20:32:16.507216 219087 cli_runner.go:164] Run: docker container inspect old-k8s-version-018253 --format={{.State.Status}}
I0401 20:32:16.507984 219087 addons.go:69] Setting dashboard=true in profile "old-k8s-version-018253"
I0401 20:32:16.508007 219087 addons.go:238] Setting addon dashboard=true in "old-k8s-version-018253"
W0401 20:32:16.508014 219087 addons.go:247] addon dashboard should already be in state true
I0401 20:32:16.508050 219087 host.go:66] Checking if "old-k8s-version-018253" exists ...
I0401 20:32:16.508502 219087 cli_runner.go:164] Run: docker container inspect old-k8s-version-018253 --format={{.State.Status}}
I0401 20:32:16.510043 219087 out.go:177] * Verifying Kubernetes components...
I0401 20:32:16.513461 219087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0401 20:32:16.569000 219087 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0401 20:32:16.569268 219087 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0401 20:32:16.572837 219087 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0401 20:32:16.572862 219087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0401 20:32:16.572946 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:16.573131 219087 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0401 20:32:16.573144 219087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0401 20:32:16.573203 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:16.581030 219087 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0401 20:32:16.584052 219087 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0401 20:32:16.587818 219087 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-018253"
W0401 20:32:16.587853 219087 addons.go:247] addon default-storageclass should already be in state true
I0401 20:32:16.587879 219087 host.go:66] Checking if "old-k8s-version-018253" exists ...
I0401 20:32:16.588285 219087 cli_runner.go:164] Run: docker container inspect old-k8s-version-018253 --format={{.State.Status}}
I0401 20:32:16.588433 219087 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0401 20:32:16.588451 219087 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0401 20:32:16.588503 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:16.635676 219087 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0401 20:32:16.635705 219087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0401 20:32:16.635772 219087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-018253
I0401 20:32:16.654517 219087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/old-k8s-version-018253/id_rsa Username:docker}
I0401 20:32:16.658775 219087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/old-k8s-version-018253/id_rsa Username:docker}
I0401 20:32:16.659433 219087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/old-k8s-version-018253/id_rsa Username:docker}
I0401 20:32:16.678457 219087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/old-k8s-version-018253/id_rsa Username:docker}
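[editor's note] Each sshutil line above opens a separate SSH client to the node's forwarded port 22 (127.0.0.1:33063 here) so the addon manifests can be copied in parallel. A minimal sketch of building such a client with golang.org/x/crypto/ssh, assuming key-based auth like the id_rsa path in the log; error handling is abbreviated:

    package main

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    func newSSHClient(addr, user, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // acceptable for a local test container; real deployments
            // should verify host keys instead
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", addr, cfg)
    }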
I0401 20:32:16.695252 219087 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0401 20:32:16.743602 219087 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-018253" to be "Ready" ...
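[editor's note] The node_ready lines that follow are a poll loop: GET the node, swallow transient errors (the connection-refused lines below, while the apiserver is still coming up), and succeed once the node's Ready condition is True. A minimal sketch of that pattern with client-go; the 3s interval is an assumption, not minikube's actual value:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollImmediate(3*time.Second, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient (e.g. connection refused): keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }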
I0401 20:32:16.826208 219087 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0401 20:32:16.826235 219087 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0401 20:32:16.840347 219087 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0401 20:32:16.840372 219087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0401 20:32:16.865997 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0401 20:32:16.878071 219087 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0401 20:32:16.878095 219087 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0401 20:32:16.882715 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0401 20:32:16.916520 219087 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0401 20:32:16.916544 219087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0401 20:32:16.972541 219087 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0401 20:32:16.972566 219087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0401 20:32:16.998034 219087 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0401 20:32:16.998062 219087 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0401 20:32:17.032790 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0401 20:32:17.087241 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.087287 219087 retry.go:31] will retry after 270.525582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
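[editor's note] Every apply in this phase races the apiserver restart, so each failure is re-run after a randomized, growing delay; that is what the interleaved retry.go lines record. A minimal sketch of the pattern, not minikube's actual retry.go, with hypothetical attempt and delay parameters:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryApply re-runs apply with jittered, doubling delays until it
    // succeeds or attempts are exhausted, mirroring the varying
    // "will retry after ..." waits in the log.
    func retryApply(apply func() error, attempts int) error {
        delay := 200 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }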
I0401 20:32:17.092635 219087 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0401 20:32:17.092667 219087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
W0401 20:32:17.122519 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.122563 219087 retry.go:31] will retry after 207.307847ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.124641 219087 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0401 20:32:17.124665 219087 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0401 20:32:17.153134 219087 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0401 20:32:17.153159 219087 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0401 20:32:17.173376 219087 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0401 20:32:17.173401 219087 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0401 20:32:17.200974 219087 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0401 20:32:17.200997 219087 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
W0401 20:32:17.207561 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.207595 219087 retry.go:31] will retry after 364.411717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.221044 219087 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0401 20:32:17.221076 219087 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0401 20:32:17.239368 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0401 20:32:17.314886 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.314958 219087 retry.go:31] will retry after 134.800384ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.330040 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0401 20:32:17.358520 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0401 20:32:17.440757 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.440825 219087 retry.go:31] will retry after 314.343817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.450161 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0401 20:32:17.461651 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.461725 219087 retry.go:31] will retry after 544.306619ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0401 20:32:17.547041 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.547124 219087 retry.go:31] will retry after 376.171954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.572399 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0401 20:32:17.649989 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.650075 219087 retry.go:31] will retry after 529.019318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.755981 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0401 20:32:17.852347 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.852382 219087 retry.go:31] will retry after 683.874898ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:17.923524 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0401 20:32:18.010512 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0401 20:32:18.010683 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:18.010702 219087 retry.go:31] will retry after 515.681885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0401 20:32:18.093217 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:18.093265 219087 retry.go:31] will retry after 679.454445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:18.179497 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0401 20:32:18.256441 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:18.256473 219087 retry.go:31] will retry after 697.291208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:18.526619 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0401 20:32:18.537180 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0401 20:32:18.649929 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:18.650020 219087 retry.go:31] will retry after 745.280274ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0401 20:32:18.657123 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:18.657165 219087 retry.go:31] will retry after 622.59401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:18.744793 219087 node_ready.go:53] error getting node "old-k8s-version-018253": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-018253": dial tcp 192.168.76.2:8443: connect: connection refused
I0401 20:32:18.772923 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0401 20:32:18.852373 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:18.852450 219087 retry.go:31] will retry after 1.230948549s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:18.954811 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0401 20:32:19.030610 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:19.030642 219087 retry.go:31] will retry after 1.14205917s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:19.280598 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0401 20:32:19.368325 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:19.368408 219087 retry.go:31] will retry after 660.596508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:19.395544 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0401 20:32:19.470895 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:19.470930 219087 retry.go:31] will retry after 1.815158336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:20.030059 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0401 20:32:20.084563 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0401 20:32:20.115779 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:20.115814 219087 retry.go:31] will retry after 2.059749063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:20.172955 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0401 20:32:20.175610 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:20.175641 219087 retry.go:31] will retry after 676.194011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0401 20:32:20.255246 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:20.255281 219087 retry.go:31] will retry after 970.734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:20.852493 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0401 20:32:20.928813 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:20.928891 219087 retry.go:31] will retry after 1.148580243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:21.226330 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0401 20:32:21.245306 219087 node_ready.go:53] error getting node "old-k8s-version-018253": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-018253": dial tcp 192.168.76.2:8443: connect: connection refused
I0401 20:32:21.286620 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0401 20:32:21.316127 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:21.316161 219087 retry.go:31] will retry after 1.808655847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0401 20:32:21.380992 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:21.381042 219087 retry.go:31] will retry after 1.850225128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:22.078182 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0401 20:32:22.155166 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:22.155199 219087 retry.go:31] will retry after 2.511188469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:22.176436 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0401 20:32:22.252529 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:22.252559 219087 retry.go:31] will retry after 1.828402375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:23.125060 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0401 20:32:23.199971 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:23.200002 219087 retry.go:31] will retry after 2.981584086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:23.232259 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0401 20:32:23.306226 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:23.306257 219087 retry.go:31] will retry after 2.437476226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:23.744199 219087 node_ready.go:53] error getting node "old-k8s-version-018253": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-018253": dial tcp 192.168.76.2:8443: connect: connection refused
I0401 20:32:24.081704 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0401 20:32:24.157988 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:24.158018 219087 retry.go:31] will retry after 5.766334122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:24.668194 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0401 20:32:24.762146 219087 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:24.762199 219087 retry.go:31] will retry after 3.632472657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0401 20:32:25.743893 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0401 20:32:26.181785 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0401 20:32:28.394864 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0401 20:32:29.925046 219087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0401 20:32:34.324159 219087 node_ready.go:49] node "old-k8s-version-018253" has status "Ready":"True"
I0401 20:32:34.324181 219087 node_ready.go:38] duration metric: took 17.58053694s for node "old-k8s-version-018253" to be "Ready" ...
I0401 20:32:34.324191 219087 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0401 20:32:34.438940 219087 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-b77sm" in "kube-system" namespace to be "Ready" ...
I0401 20:32:34.497673 219087 pod_ready.go:93] pod "coredns-74ff55c5b-b77sm" in "kube-system" namespace has status "Ready":"True"
I0401 20:32:34.497736 219087 pod_ready.go:82] duration metric: took 58.768984ms for pod "coredns-74ff55c5b-b77sm" in "kube-system" namespace to be "Ready" ...
I0401 20:32:34.497780 219087 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-018253" in "kube-system" namespace to be "Ready" ...
I0401 20:32:34.528881 219087 pod_ready.go:93] pod "etcd-old-k8s-version-018253" in "kube-system" namespace has status "Ready":"True"
I0401 20:32:34.528961 219087 pod_ready.go:82] duration metric: took 31.159293ms for pod "etcd-old-k8s-version-018253" in "kube-system" namespace to be "Ready" ...
I0401 20:32:34.528992 219087 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-018253" in "kube-system" namespace to be "Ready" ...
I0401 20:32:34.562639 219087 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-018253" in "kube-system" namespace has status "Ready":"True"
I0401 20:32:34.562697 219087 pod_ready.go:82] duration metric: took 33.682331ms for pod "kube-apiserver-old-k8s-version-018253" in "kube-system" namespace to be "Ready" ...
I0401 20:32:34.562742 219087 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-018253" in "kube-system" namespace to be "Ready" ...
I0401 20:32:35.484809 219087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.740869859s)
I0401 20:32:35.488063 219087 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-018253 addons enable metrics-server
I0401 20:32:35.543993 219087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.362167894s)
I0401 20:32:35.544025 219087 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-018253"
I0401 20:32:35.544084 219087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.149197894s)
I0401 20:32:35.544113 219087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.619046498s)
I0401 20:32:35.555262 219087 out.go:177] * Enabled addons: dashboard, metrics-server, storage-provisioner, default-storageclass
I0401 20:32:35.558219 219087 addons.go:514] duration metric: took 19.053534122s for enable addons: enabled=[dashboard metrics-server storage-provisioner default-storageclass]
I0401 20:32:36.568734 219087 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-018253" in "kube-system" namespace has status "Ready":"False"
[... 33 similar pod_ready.go:103 polls omitted, 20:32:38 through 20:33:52, each reporting pod "kube-controller-manager-old-k8s-version-018253" in "kube-system" namespace with status "Ready":"False" ...]
I0401 20:33:52.568489 219087 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-018253" in "kube-system" namespace has status "Ready":"True"
I0401 20:33:52.568513 219087 pod_ready.go:82] duration metric: took 1m18.005749519s for pod "kube-controller-manager-old-k8s-version-018253" in "kube-system" namespace to be "Ready" ...
I0401 20:33:52.568526 219087 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2mx7v" in "kube-system" namespace to be "Ready" ...
I0401 20:33:52.573024 219087 pod_ready.go:93] pod "kube-proxy-2mx7v" in "kube-system" namespace has status "Ready":"True"
I0401 20:33:52.573050 219087 pod_ready.go:82] duration metric: took 4.51735ms for pod "kube-proxy-2mx7v" in "kube-system" namespace to be "Ready" ...
I0401 20:33:52.573066 219087 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-018253" in "kube-system" namespace to be "Ready" ...
I0401 20:33:52.577261 219087 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-018253" in "kube-system" namespace has status "Ready":"True"
I0401 20:33:52.577288 219087 pod_ready.go:82] duration metric: took 4.214129ms for pod "kube-scheduler-old-k8s-version-018253" in "kube-system" namespace to be "Ready" ...
I0401 20:33:52.577300 219087 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace to be "Ready" ...
I0401 20:33:54.582015 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
[... 103 similar pod_ready.go:103 polls omitted, 20:33:56 through 20:37:50, each reporting pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace with status "Ready":"False" ...]
I0401 20:37:52.583380 219087 pod_ready.go:82] duration metric: took 4m0.00606317s for pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace to be "Ready" ...
E0401 20:37:52.583410 219087 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0401 20:37:52.583420 219087 pod_ready.go:39] duration metric: took 5m18.259207958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
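[note: the one pod that never reaches "Ready" is metrics-server, whose image this test profile deliberately pins to the unresolvable registry fake.domain (see "Using image fake.domain/registry.k8s.io/echoserver:1.4" in stdout and the kubelet ErrImagePull/ImagePullBackOff entries below), so the 4m extra wait necessarily times out. A minimal sketch of a manual check, assuming minikube's usual convention of naming the kubeconfig context after the profile, using the pod name taken verbatim from this log:

  kubectl --context old-k8s-version-018253 -n kube-system get pod metrics-server-9975d5f86-xxnsk
  kubectl --context old-k8s-version-018253 -n kube-system describe pod metrics-server-9975d5f86-xxnsk]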
I0401 20:37:52.583436 219087 api_server.go:52] waiting for apiserver process to appear ...
I0401 20:37:52.583481 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0401 20:37:52.583549 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0401 20:37:52.665330 219087 cri.go:89] found id: "aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4"
I0401 20:37:52.665355 219087 cri.go:89] found id: "5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee"
I0401 20:37:52.665360 219087 cri.go:89] found id: ""
I0401 20:37:52.665368 219087 logs.go:282] 2 containers: [aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4 5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee]
I0401 20:37:52.665434 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.678071 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.688183 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0401 20:37:52.688280 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0401 20:37:52.790260 219087 cri.go:89] found id: "ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db"
I0401 20:37:52.790286 219087 cri.go:89] found id: "6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94"
I0401 20:37:52.790308 219087 cri.go:89] found id: ""
I0401 20:37:52.790316 219087 logs.go:282] 2 containers: [ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db 6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94]
I0401 20:37:52.790389 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.797624 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.801888 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0401 20:37:52.801979 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0401 20:37:52.884920 219087 cri.go:89] found id: "39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50"
I0401 20:37:52.884963 219087 cri.go:89] found id: "bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73"
I0401 20:37:52.884968 219087 cri.go:89] found id: ""
I0401 20:37:52.884975 219087 logs.go:282] 2 containers: [39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50 bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73]
I0401 20:37:52.885039 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.892323 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.901426 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0401 20:37:52.901512 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0401 20:37:52.973562 219087 cri.go:89] found id: "c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c"
I0401 20:37:52.973587 219087 cri.go:89] found id: "1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba"
I0401 20:37:52.973592 219087 cri.go:89] found id: ""
I0401 20:37:52.973599 219087 logs.go:282] 2 containers: [c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c 1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba]
I0401 20:37:52.973667 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.977678 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.985685 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0401 20:37:52.985764 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0401 20:37:53.067543 219087 cri.go:89] found id: "48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0"
I0401 20:37:53.067565 219087 cri.go:89] found id: "35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428"
I0401 20:37:53.067570 219087 cri.go:89] found id: ""
I0401 20:37:53.067577 219087 logs.go:282] 2 containers: [48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0 35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428]
I0401 20:37:53.067633 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.072163 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.078298 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0401 20:37:53.078378 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0401 20:37:53.138643 219087 cri.go:89] found id: "28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593"
I0401 20:37:53.138669 219087 cri.go:89] found id: "4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff"
I0401 20:37:53.138676 219087 cri.go:89] found id: ""
I0401 20:37:53.138683 219087 logs.go:282] 2 containers: [28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593 4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff]
I0401 20:37:53.138742 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.144221 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.153619 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0401 20:37:53.153699 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0401 20:37:53.215208 219087 cri.go:89] found id: "094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4"
I0401 20:37:53.215235 219087 cri.go:89] found id: "e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a"
I0401 20:37:53.215247 219087 cri.go:89] found id: ""
I0401 20:37:53.215256 219087 logs.go:282] 2 containers: [094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4 e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a]
I0401 20:37:53.215340 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.221590 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.225539 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0401 20:37:53.225621 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0401 20:37:53.294839 219087 cri.go:89] found id: "ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8"
I0401 20:37:53.294871 219087 cri.go:89] found id: "abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829"
I0401 20:37:53.294876 219087 cri.go:89] found id: ""
I0401 20:37:53.294884 219087 logs.go:282] 2 containers: [ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8 abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829]
I0401 20:37:53.294950 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.301523 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.312458 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0401 20:37:53.312543 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0401 20:37:53.410976 219087 cri.go:89] found id: "7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4"
I0401 20:37:53.411039 219087 cri.go:89] found id: ""
I0401 20:37:53.411061 219087 logs.go:282] 1 containers: [7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4]
I0401 20:37:53.411155 219087 ssh_runner.go:195] Run: which crictl
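[note: the listing phase above pairs each component with its current and, where present, previous container ID; the gathering phase below then tails each one. A minimal sketch of the equivalent manual loop on the node, assuming crictl is on PATH and using only the flags already shown in this log:

  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
    for id in $(sudo crictl ps -a --quiet --name="$name"); do
      echo ">>> $name $id"
      sudo crictl logs --tail 400 "$id"
    done
  done]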
I0401 20:37:53.414823 219087 logs.go:123] Gathering logs for kube-apiserver [aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4] ...
I0401 20:37:53.414889 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4"
I0401 20:37:53.549012 219087 logs.go:123] Gathering logs for etcd [ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db] ...
I0401 20:37:53.549088 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db"
I0401 20:37:53.626458 219087 logs.go:123] Gathering logs for kindnet [e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a] ...
I0401 20:37:53.626489 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a"
I0401 20:37:53.709958 219087 logs.go:123] Gathering logs for container status ...
I0401 20:37:53.709992 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0401 20:37:53.840358 219087 logs.go:123] Gathering logs for coredns [bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73] ...
I0401 20:37:53.840438 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73"
I0401 20:37:53.913031 219087 logs.go:123] Gathering logs for kube-proxy [48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0] ...
I0401 20:37:53.913059 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0"
I0401 20:37:53.975055 219087 logs.go:123] Gathering logs for storage-provisioner [ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8] ...
I0401 20:37:53.975125 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8"
I0401 20:37:54.058157 219087 logs.go:123] Gathering logs for storage-provisioner [abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829] ...
I0401 20:37:54.058231 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829"
I0401 20:37:54.140175 219087 logs.go:123] Gathering logs for kubernetes-dashboard [7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4] ...
I0401 20:37:54.140245 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4"
I0401 20:37:54.206010 219087 logs.go:123] Gathering logs for containerd ...
I0401 20:37:54.206082 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0401 20:37:54.269170 219087 logs.go:123] Gathering logs for dmesg ...
I0401 20:37:54.269248 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0401 20:37:54.287600 219087 logs.go:123] Gathering logs for coredns [39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50] ...
I0401 20:37:54.287626 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50"
I0401 20:37:54.340376 219087 logs.go:123] Gathering logs for kube-scheduler [c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c] ...
I0401 20:37:54.340408 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c"
I0401 20:37:54.420992 219087 logs.go:123] Gathering logs for kube-scheduler [1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba] ...
I0401 20:37:54.421065 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba"
I0401 20:37:54.493191 219087 logs.go:123] Gathering logs for kube-proxy [35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428] ...
I0401 20:37:54.493263 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428"
I0401 20:37:54.544925 219087 logs.go:123] Gathering logs for kube-controller-manager [28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593] ...
I0401 20:37:54.545011 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593"
I0401 20:37:54.621163 219087 logs.go:123] Gathering logs for kubelet ...
I0401 20:37:54.621244 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0401 20:37:54.687642 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.165613 661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-kt7v7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-kt7v7" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:37:54.687935 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.165926 661 reflector.go:138] object-"kube-system"/"kindnet-token-4l6xv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-4l6xv" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:37:54.688174 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.166144 661 reflector.go:138] object-"default"/"default-token-8nfw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8nfw5" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:37:54.688407 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.166422 661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:37:54.698160 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:35 old-k8s-version-018253 kubelet[661]: E0401 20:32:35.405325 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:37:54.698758 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:35 old-k8s-version-018253 kubelet[661]: E0401 20:32:35.732359 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.701664 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:49 old-k8s-version-018253 kubelet[661]: E0401 20:32:49.452865 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:37:54.704538 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:01 old-k8s-version-018253 kubelet[661]: E0401 20:33:01.460208 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.705624 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:07 old-k8s-version-018253 kubelet[661]: E0401 20:33:07.937980 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.705993 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:08 old-k8s-version-018253 kubelet[661]: E0401 20:33:08.943923 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.706362 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:09 old-k8s-version-018253 kubelet[661]: E0401 20:33:09.951616 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.708860 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:13 old-k8s-version-018253 kubelet[661]: E0401 20:33:13.447207 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:37:54.709847 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:24 old-k8s-version-018253 kubelet[661]: E0401 20:33:24.988675 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.710326 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:26 old-k8s-version-018253 kubelet[661]: E0401 20:33:26.441570 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.710687 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:28 old-k8s-version-018253 kubelet[661]: E0401 20:33:28.629070 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.711044 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:39 old-k8s-version-018253 kubelet[661]: E0401 20:33:39.437052 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.711257 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:41 old-k8s-version-018253 kubelet[661]: E0401 20:33:41.441575 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.711878 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:52 old-k8s-version-018253 kubelet[661]: E0401 20:33:52.087801 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.714396 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:56 old-k8s-version-018253 kubelet[661]: E0401 20:33:56.457822 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:37:54.714760 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:58 old-k8s-version-018253 kubelet[661]: E0401 20:33:58.629551 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.714972 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:09 old-k8s-version-018253 kubelet[661]: E0401 20:34:09.437914 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.715330 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:10 old-k8s-version-018253 kubelet[661]: E0401 20:34:10.440924 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.715548 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:21 old-k8s-version-018253 kubelet[661]: E0401 20:34:21.437543 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.715903 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:25 old-k8s-version-018253 kubelet[661]: E0401 20:34:25.436995 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.716120 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:36 old-k8s-version-018253 kubelet[661]: E0401 20:34:36.438242 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.716739 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:41 old-k8s-version-018253 kubelet[661]: E0401 20:34:41.247792 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.717117 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:48 old-k8s-version-018253 kubelet[661]: E0401 20:34:48.629699 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.717334 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:49 old-k8s-version-018253 kubelet[661]: E0401 20:34:49.437577 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.717693 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:00 old-k8s-version-018253 kubelet[661]: E0401 20:35:00.437297 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.717908 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:01 old-k8s-version-018253 kubelet[661]: E0401 20:35:01.437716 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.718286 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:11 old-k8s-version-018253 kubelet[661]: E0401 20:35:11.437081 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.718846 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:15 old-k8s-version-018253 kubelet[661]: E0401 20:35:15.437535 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.719214 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:25 old-k8s-version-018253 kubelet[661]: E0401 20:35:25.437035 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.721857 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:28 old-k8s-version-018253 kubelet[661]: E0401 20:35:28.447976 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:37:54.722203 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:37 old-k8s-version-018253 kubelet[661]: E0401 20:35:37.437132 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.722391 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:41 old-k8s-version-018253 kubelet[661]: E0401 20:35:41.437697 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.722804 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:49 old-k8s-version-018253 kubelet[661]: E0401 20:35:49.437570 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.722991 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:56 old-k8s-version-018253 kubelet[661]: E0401 20:35:56.438417 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.723581 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:04 old-k8s-version-018253 kubelet[661]: E0401 20:36:04.499741 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.723908 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:08 old-k8s-version-018253 kubelet[661]: E0401 20:36:08.629509 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.724090 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:10 old-k8s-version-018253 kubelet[661]: E0401 20:36:10.437611 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.724415 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:19 old-k8s-version-018253 kubelet[661]: E0401 20:36:19.437200 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.724601 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:22 old-k8s-version-018253 kubelet[661]: E0401 20:36:22.437564 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.724925 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:30 old-k8s-version-018253 kubelet[661]: E0401 20:36:30.437610 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.725168 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:35 old-k8s-version-018253 kubelet[661]: E0401 20:36:35.437428 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.725525 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:45 old-k8s-version-018253 kubelet[661]: E0401 20:36:45.437026 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.725738 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:48 old-k8s-version-018253 kubelet[661]: E0401 20:36:48.437377 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.726094 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:57 old-k8s-version-018253 kubelet[661]: E0401 20:36:57.437066 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.726316 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:02 old-k8s-version-018253 kubelet[661]: E0401 20:37:02.437440 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.726671 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:10 old-k8s-version-018253 kubelet[661]: E0401 20:37:10.437686 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.726884 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:15 old-k8s-version-018253 kubelet[661]: E0401 20:37:15.437747 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.727238 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:23 old-k8s-version-018253 kubelet[661]: E0401 20:37:23.437103 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.727525 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:27 old-k8s-version-018253 kubelet[661]: E0401 20:37:27.437416 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.727897 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:35 old-k8s-version-018253 kubelet[661]: E0401 20:37:35.438272 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.728110 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:39 old-k8s-version-018253 kubelet[661]: E0401 20:37:39.439890 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.728565 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:48 old-k8s-version-018253 kubelet[661]: E0401 20:37:48.437219 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.728784 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:50 old-k8s-version-018253 kubelet[661]: E0401 20:37:50.437563 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0401 20:37:54.728817 219087 logs.go:123] Gathering logs for describe nodes ...
I0401 20:37:54.728847 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0401 20:37:54.965470 219087 logs.go:123] Gathering logs for kube-apiserver [5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee] ...
I0401 20:37:54.965547 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee"
I0401 20:37:55.101847 219087 logs.go:123] Gathering logs for etcd [6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94] ...
I0401 20:37:55.101932 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94"
I0401 20:37:55.185508 219087 logs.go:123] Gathering logs for kube-controller-manager [4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff] ...
I0401 20:37:55.185636 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff"
I0401 20:37:55.272998 219087 logs.go:123] Gathering logs for kindnet [094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4] ...
I0401 20:37:55.273072 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4"
I0401 20:37:55.334670 219087 out.go:358] Setting ErrFile to fd 2...
I0401 20:37:55.334812 219087 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0401 20:37:55.334896 219087 out.go:270] X Problems detected in kubelet:
W0401 20:37:55.335641 219087 out.go:270] Apr 01 20:37:27 old-k8s-version-018253 kubelet[661]: E0401 20:37:27.437416 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:55.335782 219087 out.go:270] Apr 01 20:37:35 old-k8s-version-018253 kubelet[661]: E0401 20:37:35.438272 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:55.335827 219087 out.go:270] Apr 01 20:37:39 old-k8s-version-018253 kubelet[661]: E0401 20:37:39.439890 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:55.335910 219087 out.go:270] Apr 01 20:37:48 old-k8s-version-018253 kubelet[661]: E0401 20:37:48.437219 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:55.335943 219087 out.go:270] Apr 01 20:37:50 old-k8s-version-018253 kubelet[661]: E0401 20:37:50.437563 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0401 20:37:55.335989 219087 out.go:358] Setting ErrFile to fd 2...
I0401 20:37:55.336018 219087 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 20:38:05.339774 219087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0401 20:38:05.356670 219087 api_server.go:72] duration metric: took 5m48.852325321s to wait for apiserver process to appear ...
I0401 20:38:05.356694 219087 api_server.go:88] waiting for apiserver healthz status ...
I0401 20:38:05.356731 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0401 20:38:05.356790 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0401 20:38:05.412265 219087 cri.go:89] found id: "aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4"
I0401 20:38:05.412286 219087 cri.go:89] found id: "5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee"
I0401 20:38:05.412291 219087 cri.go:89] found id: ""
I0401 20:38:05.412298 219087 logs.go:282] 2 containers: [aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4 5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee]
I0401 20:38:05.412361 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.420695 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.424671 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0401 20:38:05.424741 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0401 20:38:05.495927 219087 cri.go:89] found id: "ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db"
I0401 20:38:05.495990 219087 cri.go:89] found id: "6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94"
I0401 20:38:05.496010 219087 cri.go:89] found id: ""
I0401 20:38:05.496035 219087 logs.go:282] 2 containers: [ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db 6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94]
I0401 20:38:05.496128 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.502060 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.508650 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0401 20:38:05.508722 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0401 20:38:05.575950 219087 cri.go:89] found id: "39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50"
I0401 20:38:05.575971 219087 cri.go:89] found id: "bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73"
I0401 20:38:05.575976 219087 cri.go:89] found id: ""
I0401 20:38:05.575984 219087 logs.go:282] 2 containers: [39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50 bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73]
I0401 20:38:05.576044 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.580291 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.584219 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0401 20:38:05.584371 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0401 20:38:05.656471 219087 cri.go:89] found id: "c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c"
I0401 20:38:05.656545 219087 cri.go:89] found id: "1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba"
I0401 20:38:05.656565 219087 cri.go:89] found id: ""
I0401 20:38:05.656589 219087 logs.go:282] 2 containers: [c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c 1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba]
I0401 20:38:05.656685 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.661249 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.667341 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0401 20:38:05.667458 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0401 20:38:05.809481 219087 cri.go:89] found id: "48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0"
I0401 20:38:05.809570 219087 cri.go:89] found id: "35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428"
I0401 20:38:05.809590 219087 cri.go:89] found id: ""
I0401 20:38:05.809614 219087 logs.go:282] 2 containers: [48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0 35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428]
I0401 20:38:05.809719 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.813818 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.818602 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0401 20:38:05.818743 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0401 20:38:05.896895 219087 cri.go:89] found id: "28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593"
I0401 20:38:05.896984 219087 cri.go:89] found id: "4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff"
I0401 20:38:05.897004 219087 cri.go:89] found id: ""
I0401 20:38:05.897028 219087 logs.go:282] 2 containers: [28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593 4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff]
I0401 20:38:05.897179 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.901154 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.905063 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0401 20:38:05.905252 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0401 20:38:05.975652 219087 cri.go:89] found id: "094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4"
I0401 20:38:05.975729 219087 cri.go:89] found id: "e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a"
I0401 20:38:05.975748 219087 cri.go:89] found id: ""
I0401 20:38:05.975772 219087 logs.go:282] 2 containers: [094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4 e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a]
I0401 20:38:05.975859 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.980325 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.984586 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0401 20:38:05.984728 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0401 20:38:06.065357 219087 cri.go:89] found id: "7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4"
I0401 20:38:06.065424 219087 cri.go:89] found id: ""
I0401 20:38:06.065455 219087 logs.go:282] 1 containers: [7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4]
I0401 20:38:06.065547 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:06.071786 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0401 20:38:06.071956 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0401 20:38:06.135401 219087 cri.go:89] found id: "ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8"
I0401 20:38:06.135538 219087 cri.go:89] found id: "abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829"
I0401 20:38:06.135559 219087 cri.go:89] found id: ""
I0401 20:38:06.135592 219087 logs.go:282] 2 containers: [ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8 abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829]
I0401 20:38:06.135730 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:06.142152 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:06.149630 219087 logs.go:123] Gathering logs for dmesg ...
I0401 20:38:06.149703 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0401 20:38:06.179676 219087 logs.go:123] Gathering logs for etcd [ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db] ...
I0401 20:38:06.179755 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db"
I0401 20:38:06.266802 219087 logs.go:123] Gathering logs for kube-controller-manager [28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593] ...
I0401 20:38:06.266873 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593"
I0401 20:38:06.360162 219087 logs.go:123] Gathering logs for kube-controller-manager [4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff] ...
I0401 20:38:06.360242 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff"
I0401 20:38:06.450106 219087 logs.go:123] Gathering logs for container status ...
I0401 20:38:06.450139 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0401 20:38:06.536857 219087 logs.go:123] Gathering logs for kube-scheduler [1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba] ...
I0401 20:38:06.536886 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba"
I0401 20:38:06.590418 219087 logs.go:123] Gathering logs for kube-proxy [35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428] ...
I0401 20:38:06.590449 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428"
I0401 20:38:06.643042 219087 logs.go:123] Gathering logs for describe nodes ...
I0401 20:38:06.643081 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0401 20:38:06.917656 219087 logs.go:123] Gathering logs for etcd [6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94] ...
I0401 20:38:06.917687 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94"
I0401 20:38:06.988674 219087 logs.go:123] Gathering logs for coredns [bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73] ...
I0401 20:38:06.988708 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73"
I0401 20:38:07.086393 219087 logs.go:123] Gathering logs for kindnet [094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4] ...
I0401 20:38:07.086426 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4"
I0401 20:38:07.169164 219087 logs.go:123] Gathering logs for storage-provisioner [ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8] ...
I0401 20:38:07.169269 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8"
I0401 20:38:07.251767 219087 logs.go:123] Gathering logs for containerd ...
I0401 20:38:07.251795 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0401 20:38:07.324610 219087 logs.go:123] Gathering logs for kube-proxy [48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0] ...
I0401 20:38:07.324647 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0"
I0401 20:38:07.390912 219087 logs.go:123] Gathering logs for kindnet [e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a] ...
I0401 20:38:07.390941 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a"
I0401 20:38:07.439773 219087 logs.go:123] Gathering logs for kubernetes-dashboard [7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4] ...
I0401 20:38:07.439807 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4"
I0401 20:38:07.506054 219087 logs.go:123] Gathering logs for storage-provisioner [abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829] ...
I0401 20:38:07.506099 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829"
I0401 20:38:07.563582 219087 logs.go:123] Gathering logs for kubelet ...
I0401 20:38:07.563617 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0401 20:38:07.627171 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.165613 661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-kt7v7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-kt7v7" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:38:07.627435 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.165926 661 reflector.go:138] object-"kube-system"/"kindnet-token-4l6xv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-4l6xv" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:38:07.627671 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.166144 661 reflector.go:138] object-"default"/"default-token-8nfw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8nfw5" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:38:07.627897 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.166422 661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:38:07.636773 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:35 old-k8s-version-018253 kubelet[661]: E0401 20:32:35.405325 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:38:07.637337 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:35 old-k8s-version-018253 kubelet[661]: E0401 20:32:35.732359 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.640140 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:49 old-k8s-version-018253 kubelet[661]: E0401 20:32:49.452865 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:38:07.641970 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:01 old-k8s-version-018253 kubelet[661]: E0401 20:33:01.460208 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.642932 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:07 old-k8s-version-018253 kubelet[661]: E0401 20:33:07.937980 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.643287 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:08 old-k8s-version-018253 kubelet[661]: E0401 20:33:08.943923 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.643643 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:09 old-k8s-version-018253 kubelet[661]: E0401 20:33:09.951616 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.646367 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:13 old-k8s-version-018253 kubelet[661]: E0401 20:33:13.447207 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:38:07.647329 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:24 old-k8s-version-018253 kubelet[661]: E0401 20:33:24.988675 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.647771 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:26 old-k8s-version-018253 kubelet[661]: E0401 20:33:26.441570 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.648125 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:28 old-k8s-version-018253 kubelet[661]: E0401 20:33:28.629070 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.648479 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:39 old-k8s-version-018253 kubelet[661]: E0401 20:33:39.437052 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.648749 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:41 old-k8s-version-018253 kubelet[661]: E0401 20:33:41.441575 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.649481 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:52 old-k8s-version-018253 kubelet[661]: E0401 20:33:52.087801 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.652404 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:56 old-k8s-version-018253 kubelet[661]: E0401 20:33:56.457822 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:38:07.652789 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:58 old-k8s-version-018253 kubelet[661]: E0401 20:33:58.629551 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.653059 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:09 old-k8s-version-018253 kubelet[661]: E0401 20:34:09.437914 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.653438 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:10 old-k8s-version-018253 kubelet[661]: E0401 20:34:10.440924 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.653755 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:21 old-k8s-version-018253 kubelet[661]: E0401 20:34:21.437543 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.654175 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:25 old-k8s-version-018253 kubelet[661]: E0401 20:34:25.436995 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.654371 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:36 old-k8s-version-018253 kubelet[661]: E0401 20:34:36.438242 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.655026 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:41 old-k8s-version-018253 kubelet[661]: E0401 20:34:41.247792 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.655387 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:48 old-k8s-version-018253 kubelet[661]: E0401 20:34:48.629699 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.655577 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:49 old-k8s-version-018253 kubelet[661]: E0401 20:34:49.437577 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.655906 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:00 old-k8s-version-018253 kubelet[661]: E0401 20:35:00.437297 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.656096 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:01 old-k8s-version-018253 kubelet[661]: E0401 20:35:01.437716 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.656462 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:11 old-k8s-version-018253 kubelet[661]: E0401 20:35:11.437081 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.656691 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:15 old-k8s-version-018253 kubelet[661]: E0401 20:35:15.437535 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.657055 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:25 old-k8s-version-018253 kubelet[661]: E0401 20:35:25.437035 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.659877 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:28 old-k8s-version-018253 kubelet[661]: E0401 20:35:28.447976 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:38:07.660251 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:37 old-k8s-version-018253 kubelet[661]: E0401 20:35:37.437132 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.660461 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:41 old-k8s-version-018253 kubelet[661]: E0401 20:35:41.437697 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.660815 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:49 old-k8s-version-018253 kubelet[661]: E0401 20:35:49.437570 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.661040 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:56 old-k8s-version-018253 kubelet[661]: E0401 20:35:56.438417 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.661656 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:04 old-k8s-version-018253 kubelet[661]: E0401 20:36:04.499741 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.662017 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:08 old-k8s-version-018253 kubelet[661]: E0401 20:36:08.629509 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.662240 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:10 old-k8s-version-018253 kubelet[661]: E0401 20:36:10.437611 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.662605 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:19 old-k8s-version-018253 kubelet[661]: E0401 20:36:19.437200 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.662830 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:22 old-k8s-version-018253 kubelet[661]: E0401 20:36:22.437564 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.663183 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:30 old-k8s-version-018253 kubelet[661]: E0401 20:36:30.437610 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.663396 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:35 old-k8s-version-018253 kubelet[661]: E0401 20:36:35.437428 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.663747 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:45 old-k8s-version-018253 kubelet[661]: E0401 20:36:45.437026 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.663953 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:48 old-k8s-version-018253 kubelet[661]: E0401 20:36:48.437377 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.664315 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:57 old-k8s-version-018253 kubelet[661]: E0401 20:36:57.437066 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.664523 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:02 old-k8s-version-018253 kubelet[661]: E0401 20:37:02.437440 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.664877 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:10 old-k8s-version-018253 kubelet[661]: E0401 20:37:10.437686 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.665086 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:15 old-k8s-version-018253 kubelet[661]: E0401 20:37:15.437747 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.665489 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:23 old-k8s-version-018253 kubelet[661]: E0401 20:37:23.437103 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.665700 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:27 old-k8s-version-018253 kubelet[661]: E0401 20:37:27.437416 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.666056 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:35 old-k8s-version-018253 kubelet[661]: E0401 20:37:35.438272 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.666510 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:39 old-k8s-version-018253 kubelet[661]: E0401 20:37:39.439890 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.666882 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:48 old-k8s-version-018253 kubelet[661]: E0401 20:37:48.437219 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.667105 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:50 old-k8s-version-018253 kubelet[661]: E0401 20:37:50.437563 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.667455 219087 logs.go:138] Found kubelet problem: Apr 01 20:38:01 old-k8s-version-018253 kubelet[661]: E0401 20:38:01.437151 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.667667 219087 logs.go:138] Found kubelet problem: Apr 01 20:38:05 old-k8s-version-018253 kubelet[661]: E0401 20:38:05.438853 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0401 20:38:07.667684 219087 logs.go:123] Gathering logs for kube-apiserver [aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4] ...
I0401 20:38:07.667713 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4"
I0401 20:38:07.777836 219087 logs.go:123] Gathering logs for kube-apiserver [5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee] ...
I0401 20:38:07.777871 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee"
I0401 20:38:07.867675 219087 logs.go:123] Gathering logs for coredns [39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50] ...
I0401 20:38:07.867709 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50"
I0401 20:38:07.919313 219087 logs.go:123] Gathering logs for kube-scheduler [c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c] ...
I0401 20:38:07.919346 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c"
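The log-gathering step above shells out to crictl once per container over the ssh_runner. A minimal Go sketch of that pattern, assuming crictl is installed at /usr/bin/crictl and may be invoked via sudo; the container ID below is copied from the log purely for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the ssh_runner invocations above:
// sudo /usr/bin/crictl logs --tail <n> <id>, returning combined output.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs",
		"--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4", 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(logs)
}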
I0401 20:38:07.992501 219087 out.go:358] Setting ErrFile to fd 2...
I0401 20:38:07.992593 219087 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0401 20:38:07.992718 219087 out.go:270] X Problems detected in kubelet:
W0401 20:38:07.992769 219087 out.go:270] Apr 01 20:37:39 old-k8s-version-018253 kubelet[661]: E0401 20:37:39.439890 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.992805 219087 out.go:270] Apr 01 20:37:48 old-k8s-version-018253 kubelet[661]: E0401 20:37:48.437219 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.992864 219087 out.go:270] Apr 01 20:37:50 old-k8s-version-018253 kubelet[661]: E0401 20:37:50.437563 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.992925 219087 out.go:270] Apr 01 20:38:01 old-k8s-version-018253 kubelet[661]: E0401 20:38:01.437151 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.992973 219087 out.go:270] Apr 01 20:38:05 old-k8s-version-018253 kubelet[661]: E0401 20:38:05.438853 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0401 20:38:07.993039 219087 out.go:358] Setting ErrFile to fd 2...
I0401 20:38:07.993064 219087 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 20:38:17.993573 219087 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0401 20:38:18.011180 219087 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
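The probe logged here is a plain HTTPS GET against the apiserver's /healthz that treats a 200 response with body "ok" as healthy; note the endpoint itself is healthy at this point, and the exit below comes from the version check, not the probe. A minimal sketch of such a probe, with TLS verification skipped only for brevity (a real client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz GETs the apiserver healthz URL and treats anything other
// than a 200 as unhealthy, echoing the body for diagnostics.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipped only to keep the sketch short.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.76.2:8443/healthz"); err != nil {
		fmt.Println("unhealthy:", err)
		return
	}
	fmt.Println("ok")
}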
I0401 20:38:18.014651 219087 out.go:201]
W0401 20:38:18.017459 219087 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0401 20:38:18.017521 219087 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0401 20:38:18.017540 219087 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0401 20:38:18.017545 219087 out.go:270] *
W0401 20:38:18.018459 219087 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0401 20:38:18.020413 219087 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-018253 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-018253
helpers_test.go:235: (dbg) docker inspect old-k8s-version-018253:
-- stdout --
[
{
"Id": "dd98f723bc5884d200afb3e1b8b8be6d5bcccf3629a7f265c2ccbd45b84da2e7",
"Created": "2025-04-01T20:29:23.736597696Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 219216,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-04-01T20:32:09.20192437Z",
"FinishedAt": "2025-04-01T20:32:08.36554643Z"
},
"Image": "sha256:df0c2544fb3106b890f0a9ab81fcf49f97edb092b83e47f42288ad5dfe1f4b40",
"ResolvConfPath": "/var/lib/docker/containers/dd98f723bc5884d200afb3e1b8b8be6d5bcccf3629a7f265c2ccbd45b84da2e7/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/dd98f723bc5884d200afb3e1b8b8be6d5bcccf3629a7f265c2ccbd45b84da2e7/hostname",
"HostsPath": "/var/lib/docker/containers/dd98f723bc5884d200afb3e1b8b8be6d5bcccf3629a7f265c2ccbd45b84da2e7/hosts",
"LogPath": "/var/lib/docker/containers/dd98f723bc5884d200afb3e1b8b8be6d5bcccf3629a7f265c2ccbd45b84da2e7/dd98f723bc5884d200afb3e1b8b8be6d5bcccf3629a7f265c2ccbd45b84da2e7-json.log",
"Name": "/old-k8s-version-018253",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-018253:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-018253",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "dd98f723bc5884d200afb3e1b8b8be6d5bcccf3629a7f265c2ccbd45b84da2e7",
"LowerDir": "/var/lib/docker/overlay2/6f13f60729b6125e5e0c403d939168dadc7030bca47fdb1c6b83b7f68d55a47e-init/diff:/var/lib/docker/overlay2/ba64d5b03843e50452e33e54f0e1f6869280a08f7cd834ae5894ce121585f1fe/diff",
"MergedDir": "/var/lib/docker/overlay2/6f13f60729b6125e5e0c403d939168dadc7030bca47fdb1c6b83b7f68d55a47e/merged",
"UpperDir": "/var/lib/docker/overlay2/6f13f60729b6125e5e0c403d939168dadc7030bca47fdb1c6b83b7f68d55a47e/diff",
"WorkDir": "/var/lib/docker/overlay2/6f13f60729b6125e5e0c403d939168dadc7030bca47fdb1c6b83b7f68d55a47e/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-018253",
"Source": "/var/lib/docker/volumes/old-k8s-version-018253/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-018253",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-018253",
"name.minikube.sigs.k8s.io": "old-k8s-version-018253",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2cb44d43e1cf8381e82cb78cb2b670f644bc44f685426855d6c187c86b881ac4",
"SandboxKey": "/var/run/docker/netns/2cb44d43e1cf",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33063"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33064"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33067"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33065"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33066"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-018253": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "12:6a:7b:5f:09:bc",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "0a796db58f1f2a429254cf761de8b9d5f4d06baf6a66ed8555f859019b5b5681",
"EndpointID": "ef055c589a3ceb6926dc943bc7f7019b6bcbc10292ce0f09a4653d84db6653d0",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-018253",
"dd98f723bc58"
]
}
}
}
}
]
-- /stdout --
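The post-mortem consumes this docker inspect JSON directly. A small Go sketch that extracts a few of the fields shown above (Name, RestartCount, State.Status) by shelling out to docker inspect and unmarshalling the returned array; the struct deliberately declares only the subset of the schema this example reads:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspected covers only the fields this sketch needs; docker inspect
// returns a JSON array of objects shaped like the output above.
type inspected struct {
	Name         string
	RestartCount int
	State        struct {
		Status  string
		Running bool
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-018253").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var containers []inspected
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, c := range containers {
		fmt.Printf("%s status=%s running=%t restarts=%d\n",
			c.Name, c.State.Status, c.State.Running, c.RestartCount)
	}
}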
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-018253 -n old-k8s-version-018253
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-018253 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-018253 logs -n 25: (2.037773005s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| start | -p force-systemd-flag-175021 | force-systemd-flag-175021 | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:28 UTC |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-flag-175021 | force-systemd-flag-175021 | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:28 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-flag-175021 | force-systemd-flag-175021 | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:28 UTC |
| start | -p cert-options-439788 | cert-options-439788 | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:29 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-439788 ssh | cert-options-439788 | jenkins | v1.35.0 | 01 Apr 25 20:29 UTC | 01 Apr 25 20:29 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-439788 -- sudo | cert-options-439788 | jenkins | v1.35.0 | 01 Apr 25 20:29 UTC | 01 Apr 25 20:29 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-439788 | cert-options-439788 | jenkins | v1.35.0 | 01 Apr 25 20:29 UTC | 01 Apr 25 20:29 UTC |
| start | -p old-k8s-version-018253 | old-k8s-version-018253 | jenkins | v1.35.0 | 01 Apr 25 20:29 UTC | 01 Apr 25 20:31 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-624012 | cert-expiration-624012 | jenkins | v1.35.0 | 01 Apr 25 20:30 UTC | 01 Apr 25 20:30 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-624012 | cert-expiration-624012 | jenkins | v1.35.0 | 01 Apr 25 20:30 UTC | 01 Apr 25 20:30 UTC |
| start | -p no-preload-463422 | no-preload-463422 | jenkins | v1.35.0 | 01 Apr 25 20:30 UTC | 01 Apr 25 20:32 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p old-k8s-version-018253 | old-k8s-version-018253 | jenkins | v1.35.0 | 01 Apr 25 20:31 UTC | 01 Apr 25 20:31 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-018253 | old-k8s-version-018253 | jenkins | v1.35.0 | 01 Apr 25 20:31 UTC | 01 Apr 25 20:32 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-018253 | old-k8s-version-018253 | jenkins | v1.35.0 | 01 Apr 25 20:32 UTC | 01 Apr 25 20:32 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-018253 | old-k8s-version-018253 | jenkins | v1.35.0 | 01 Apr 25 20:32 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-463422 | no-preload-463422 | jenkins | v1.35.0 | 01 Apr 25 20:32 UTC | 01 Apr 25 20:32 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-463422 | no-preload-463422 | jenkins | v1.35.0 | 01 Apr 25 20:32 UTC | 01 Apr 25 20:32 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-463422 | no-preload-463422 | jenkins | v1.35.0 | 01 Apr 25 20:32 UTC | 01 Apr 25 20:32 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-463422 | no-preload-463422 | jenkins | v1.35.0 | 01 Apr 25 20:32 UTC | 01 Apr 25 20:37 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| image | no-preload-463422 image list | no-preload-463422 | jenkins | v1.35.0 | 01 Apr 25 20:37 UTC | 01 Apr 25 20:37 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-463422 | no-preload-463422 | jenkins | v1.35.0 | 01 Apr 25 20:37 UTC | 01 Apr 25 20:37 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-463422 | no-preload-463422 | jenkins | v1.35.0 | 01 Apr 25 20:37 UTC | 01 Apr 25 20:37 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-463422 | no-preload-463422 | jenkins | v1.35.0 | 01 Apr 25 20:37 UTC | 01 Apr 25 20:37 UTC |
| delete | -p no-preload-463422 | no-preload-463422 | jenkins | v1.35.0 | 01 Apr 25 20:37 UTC | 01 Apr 25 20:37 UTC |
| start | -p embed-certs-797670 | embed-certs-797670 | jenkins | v1.35.0 | 01 Apr 25 20:37 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/01 20:37:25
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.24.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0401 20:37:25.137601 228685 out.go:345] Setting OutFile to fd 1 ...
I0401 20:37:25.137780 228685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 20:37:25.137792 228685 out.go:358] Setting ErrFile to fd 2...
I0401 20:37:25.137797 228685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 20:37:25.138111 228685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-2281/.minikube/bin
I0401 20:37:25.138712 228685 out.go:352] Setting JSON to false
I0401 20:37:25.139771 228685 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4791,"bootTime":1743535055,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0401 20:37:25.139836 228685 start.go:139] virtualization:
I0401 20:37:25.144336 228685 out.go:177] * [embed-certs-797670] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0401 20:37:25.147971 228685 out.go:177] - MINIKUBE_LOCATION=20506
I0401 20:37:25.148013 228685 notify.go:220] Checking for updates...
I0401 20:37:25.154571 228685 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0401 20:37:25.157811 228685 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20506-2281/kubeconfig
I0401 20:37:25.161103 228685 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-2281/.minikube
I0401 20:37:25.165388 228685 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0401 20:37:25.168464 228685 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0401 20:37:25.172117 228685 config.go:182] Loaded profile config "old-k8s-version-018253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0401 20:37:25.172230 228685 driver.go:394] Setting default libvirt URI to qemu:///system
I0401 20:37:25.202388 228685 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0401 20:37:25.202512 228685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0401 20:37:25.267592 228685 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-01 20:37:25.258254018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0401 20:37:25.267698 228685 docker.go:318] overlay module found
I0401 20:37:25.271006 228685 out.go:177] * Using the docker driver based on user configuration
I0401 20:37:25.273995 228685 start.go:297] selected driver: docker
I0401 20:37:25.274012 228685 start.go:901] validating driver "docker" against <nil>
I0401 20:37:25.274027 228685 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0401 20:37:25.274783 228685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0401 20:37:25.333812 228685 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-01 20:37:25.322786922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0401 20:37:25.333969 228685 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0401 20:37:25.334212 228685 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0401 20:37:25.337289 228685 out.go:177] * Using Docker driver with root privileges
I0401 20:37:25.340282 228685 cni.go:84] Creating CNI manager for ""
I0401 20:37:25.340354 228685 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0401 20:37:25.340367 228685 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0401 20:37:25.340435 228685 start.go:340] cluster config:
{Name:embed-certs-797670 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-797670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0401 20:37:25.345505 228685 out.go:177] * Starting "embed-certs-797670" primary control-plane node in "embed-certs-797670" cluster
I0401 20:37:25.348409 228685 cache.go:121] Beginning downloading kic base image for docker with containerd
I0401 20:37:25.351329 228685 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
I0401 20:37:25.354283 228685 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0401 20:37:25.354331 228685 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
I0401 20:37:25.354365 228685 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-2281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4
I0401 20:37:25.354375 228685 cache.go:56] Caching tarball of preloaded images
I0401 20:37:25.354466 228685 preload.go:172] Found /home/jenkins/minikube-integration/20506-2281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0401 20:37:25.354477 228685 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0401 20:37:25.354584 228685 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/config.json ...
I0401 20:37:25.354611 228685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/config.json: {Name:mk89e3c3aa7ef6a8c2de66e8343f187a08c0d29d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:37:25.378435 228685 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
I0401 20:37:25.378462 228685 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
I0401 20:37:25.378477 228685 cache.go:230] Successfully downloaded all kic artifacts
I0401 20:37:25.378501 228685 start.go:360] acquireMachinesLock for embed-certs-797670: {Name:mk5b2bba8dcf43cc3c71218e61650dce53bf73f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 20:37:25.378662 228685 start.go:364] duration metric: took 138.314µs to acquireMachinesLock for "embed-certs-797670"
I0401 20:37:25.378697 228685 start.go:93] Provisioning new machine with config: &{Name:embed-certs-797670 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-797670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0401 20:37:25.378780 228685 start.go:125] createHost starting for "" (driver="docker")
I0401 20:37:26.083348 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:28.585817 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:25.382237 228685 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0401 20:37:25.382538 228685 start.go:159] libmachine.API.Create for "embed-certs-797670" (driver="docker")
I0401 20:37:25.382576 228685 client.go:168] LocalClient.Create starting
I0401 20:37:25.382654 228685 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem
I0401 20:37:25.382691 228685 main.go:141] libmachine: Decoding PEM data...
I0401 20:37:25.382712 228685 main.go:141] libmachine: Parsing certificate...
I0401 20:37:25.382765 228685 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-2281/.minikube/certs/cert.pem
I0401 20:37:25.382786 228685 main.go:141] libmachine: Decoding PEM data...
I0401 20:37:25.382797 228685 main.go:141] libmachine: Parsing certificate...
I0401 20:37:25.383158 228685 cli_runner.go:164] Run: docker network inspect embed-certs-797670 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0401 20:37:25.400417 228685 cli_runner.go:211] docker network inspect embed-certs-797670 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0401 20:37:25.400507 228685 network_create.go:284] running [docker network inspect embed-certs-797670] to gather additional debugging logs...
I0401 20:37:25.400530 228685 cli_runner.go:164] Run: docker network inspect embed-certs-797670
W0401 20:37:25.418923 228685 cli_runner.go:211] docker network inspect embed-certs-797670 returned with exit code 1
I0401 20:37:25.418959 228685 network_create.go:287] error running [docker network inspect embed-certs-797670]: docker network inspect embed-certs-797670: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-797670 not found
I0401 20:37:25.418974 228685 network_create.go:289] output of [docker network inspect embed-certs-797670]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-797670 not found
** /stderr **
I0401 20:37:25.419078 228685 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0401 20:37:25.435941 228685 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0ccc5cb20c0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:bc:5d:02:ae:e5} reservation:<nil>}
I0401 20:37:25.436449 228685 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b337e960fe97 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:4d:af:4d:5b:f2} reservation:<nil>}
I0401 20:37:25.436819 228685 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-235589b5b4e1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:d5:43:f5:d6:07} reservation:<nil>}
I0401 20:37:25.437160 228685 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0a796db58f1f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:0f:ef:a3:1a:7b} reservation:<nil>}
I0401 20:37:25.437685 228685 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a08860}
I0401 20:37:25.437709 228685 network_create.go:124] attempt to create docker network embed-certs-797670 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0401 20:37:25.437768 228685 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-797670 embed-certs-797670
I0401 20:37:25.506683 228685 network_create.go:108] docker network embed-certs-797670 192.168.85.0/24 created
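The subnet walk above (49, 58, 67, 76, then 85) suggests candidate /24s are tried in steps of 9 until one is not claimed by an existing bridge. A toy sketch of that selection; the start value and step are inferred from the log output, not taken from minikube's source:

package main

import "fmt"

// firstFreeSubnet walks 192.168.<x>.0/24 candidates in steps of 9 and
// returns the first one absent from the set of taken subnets.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third < 256; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	// The four subnets the log reports as already taken.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	if subnet, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", subnet) // 192.168.85.0/24
	}
}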
I0401 20:37:25.506714 228685 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-797670" container
I0401 20:37:25.506806 228685 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0401 20:37:25.525637 228685 cli_runner.go:164] Run: docker volume create embed-certs-797670 --label name.minikube.sigs.k8s.io=embed-certs-797670 --label created_by.minikube.sigs.k8s.io=true
I0401 20:37:25.544526 228685 oci.go:103] Successfully created a docker volume embed-certs-797670
I0401 20:37:25.544614 228685 cli_runner.go:164] Run: docker run --rm --name embed-certs-797670-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-797670 --entrypoint /usr/bin/test -v embed-certs-797670:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
I0401 20:37:26.127657 228685 oci.go:107] Successfully prepared a docker volume embed-certs-797670
I0401 20:37:26.127701 228685 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0401 20:37:26.127721 228685 kic.go:194] Starting extracting preloaded images to volume ...
I0401 20:37:26.127805 228685 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-2281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-797670:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
I0401 20:37:31.084684 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:33.583140 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:31.072877 228685 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-2281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-797670:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.945022837s)
I0401 20:37:31.072913 228685 kic.go:203] duration metric: took 4.945188195s to extract preloaded images to volume ...
W0401 20:37:31.073105 228685 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0401 20:37:31.073229 228685 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0401 20:37:31.141342 228685 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-797670 --name embed-certs-797670 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-797670 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-797670 --network embed-certs-797670 --ip 192.168.85.2 --volume embed-certs-797670:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
I0401 20:37:31.445810 228685 cli_runner.go:164] Run: docker container inspect embed-certs-797670 --format={{.State.Running}}
I0401 20:37:31.462792 228685 cli_runner.go:164] Run: docker container inspect embed-certs-797670 --format={{.State.Status}}
I0401 20:37:31.488370 228685 cli_runner.go:164] Run: docker exec embed-certs-797670 stat /var/lib/dpkg/alternatives/iptables
I0401 20:37:31.541321 228685 oci.go:144] the created container "embed-certs-797670" has a running status.
I0401 20:37:31.541347 228685 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-2281/.minikube/machines/embed-certs-797670/id_rsa...
I0401 20:37:31.652368 228685 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-2281/.minikube/machines/embed-certs-797670/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0401 20:37:31.677229 228685 cli_runner.go:164] Run: docker container inspect embed-certs-797670 --format={{.State.Status}}
I0401 20:37:31.701570 228685 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0401 20:37:31.701588 228685 kic_runner.go:114] Args: [docker exec --privileged embed-certs-797670 chown docker:docker /home/docker/.ssh/authorized_keys]
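The kic runner generates an SSH keypair on the host and installs the public half as /home/docker/.ssh/authorized_keys inside the container. A self-contained sketch of producing such a keypair in Go, assuming a 2048-bit RSA key (the actual key parameters may differ); it uses golang.org/x/crypto/ssh to render the authorized_keys line:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the private key (the id_rsa side).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}

	// Render the authorized_keys line (the id_rsa.pub side).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
}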
I0401 20:37:31.799229 228685 cli_runner.go:164] Run: docker container inspect embed-certs-797670 --format={{.State.Status}}
I0401 20:37:31.826087 228685 machine.go:93] provisionDockerMachine start ...
I0401 20:37:31.826214 228685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-797670
I0401 20:37:31.855028 228685 main.go:141] libmachine: Using SSH client type: native
I0401 20:37:31.855370 228685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33073 <nil> <nil>}
I0401 20:37:31.855381 228685 main.go:141] libmachine: About to run SSH command:
hostname
I0401 20:37:31.856487 228685 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0401 20:37:34.980363 228685 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-797670
I0401 20:37:34.980395 228685 ubuntu.go:169] provisioning hostname "embed-certs-797670"
I0401 20:37:34.980457 228685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-797670
I0401 20:37:34.999787 228685 main.go:141] libmachine: Using SSH client type: native
I0401 20:37:35.000121 228685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33073 <nil> <nil>}
I0401 20:37:35.000138 228685 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-797670 && echo "embed-certs-797670" | sudo tee /etc/hostname
I0401 20:37:35.142140 228685 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-797670
I0401 20:37:35.142270 228685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-797670
I0401 20:37:35.163266 228685 main.go:141] libmachine: Using SSH client type: native
I0401 20:37:35.163606 228685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33073 <nil> <nil>}
I0401 20:37:35.163628 228685 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-797670' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-797670/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-797670' | sudo tee -a /etc/hosts;
fi
fi
I0401 20:37:35.289776 228685 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0401 20:37:35.289850 228685 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-2281/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-2281/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-2281/.minikube}
I0401 20:37:35.289895 228685 ubuntu.go:177] setting up certificates
I0401 20:37:35.289935 228685 provision.go:84] configureAuth start
I0401 20:37:35.290053 228685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-797670
I0401 20:37:35.308064 228685 provision.go:143] copyHostCerts
I0401 20:37:35.308144 228685 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-2281/.minikube/ca.pem, removing ...
I0401 20:37:35.308157 228685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-2281/.minikube/ca.pem
I0401 20:37:35.308236 228685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-2281/.minikube/ca.pem (1078 bytes)
I0401 20:37:35.308332 228685 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-2281/.minikube/cert.pem, removing ...
I0401 20:37:35.308342 228685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-2281/.minikube/cert.pem
I0401 20:37:35.308371 228685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-2281/.minikube/cert.pem (1123 bytes)
I0401 20:37:35.308430 228685 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-2281/.minikube/key.pem, removing ...
I0401 20:37:35.308438 228685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-2281/.minikube/key.pem
I0401 20:37:35.308465 228685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-2281/.minikube/key.pem (1675 bytes)
I0401 20:37:35.308533 228685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-2281/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca-key.pem org=jenkins.embed-certs-797670 san=[127.0.0.1 192.168.85.2 embed-certs-797670 localhost minikube]
I0401 20:37:36.826135 228685 provision.go:177] copyRemoteCerts
I0401 20:37:36.826232 228685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0401 20:37:36.826292 228685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-797670
I0401 20:37:36.853196 228685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/embed-certs-797670/id_rsa Username:docker}
I0401 20:37:36.946273 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0401 20:37:36.973173 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0401 20:37:36.997740 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0401 20:37:37.030123 228685 provision.go:87] duration metric: took 1.740139087s to configureAuth
I0401 20:37:37.030151 228685 ubuntu.go:193] setting minikube options for container-runtime
I0401 20:37:37.030379 228685 config.go:182] Loaded profile config "embed-certs-797670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0401 20:37:37.030392 228685 machine.go:96] duration metric: took 5.204282708s to provisionDockerMachine
I0401 20:37:37.030400 228685 client.go:171] duration metric: took 11.64781467s to LocalClient.Create
I0401 20:37:37.030430 228685 start.go:167] duration metric: took 11.647893883s to libmachine.API.Create "embed-certs-797670"
I0401 20:37:37.030442 228685 start.go:293] postStartSetup for "embed-certs-797670" (driver="docker")
I0401 20:37:37.030451 228685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0401 20:37:37.030527 228685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0401 20:37:37.030572 228685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-797670
I0401 20:37:37.048633 228685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/embed-certs-797670/id_rsa Username:docker}
I0401 20:37:37.138862 228685 ssh_runner.go:195] Run: cat /etc/os-release
I0401 20:37:37.142547 228685 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0401 20:37:37.142586 228685 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0401 20:37:37.142597 228685 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0401 20:37:37.142603 228685 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0401 20:37:37.142614 228685 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-2281/.minikube/addons for local assets ...
I0401 20:37:37.142672 228685 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-2281/.minikube/files for local assets ...
I0401 20:37:37.142763 228685 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-2281/.minikube/files/etc/ssl/certs/75972.pem -> 75972.pem in /etc/ssl/certs
I0401 20:37:37.142877 228685 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0401 20:37:37.154266 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/files/etc/ssl/certs/75972.pem --> /etc/ssl/certs/75972.pem (1708 bytes)
I0401 20:37:37.183324 228685 start.go:296] duration metric: took 152.86772ms for postStartSetup
I0401 20:37:37.183721 228685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-797670
I0401 20:37:37.203128 228685 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/config.json ...
I0401 20:37:37.203416 228685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0401 20:37:37.203466 228685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-797670
I0401 20:37:37.220745 228685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/embed-certs-797670/id_rsa Username:docker}
I0401 20:37:37.310139 228685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0401 20:37:37.315520 228685 start.go:128] duration metric: took 11.936724567s to createHost
I0401 20:37:37.315542 228685 start.go:83] releasing machines lock for "embed-certs-797670", held for 11.93686444s
I0401 20:37:37.315614 228685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-797670
I0401 20:37:37.332494 228685 ssh_runner.go:195] Run: cat /version.json
I0401 20:37:37.332553 228685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-797670
I0401 20:37:37.332817 228685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0401 20:37:37.332908 228685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-797670
I0401 20:37:37.361348 228685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/embed-certs-797670/id_rsa Username:docker}
I0401 20:37:37.363252 228685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/embed-certs-797670/id_rsa Username:docker}
I0401 20:37:37.601141 228685 ssh_runner.go:195] Run: systemctl --version
I0401 20:37:37.605785 228685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0401 20:37:37.610140 228685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0401 20:37:37.642176 228685 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0401 20:37:37.642261 228685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0401 20:37:37.675860 228685 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0401 20:37:37.675884 228685 start.go:495] detecting cgroup driver to use...
I0401 20:37:37.675930 228685 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0401 20:37:37.676027 228685 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0401 20:37:37.689622 228685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0401 20:37:37.701743 228685 docker.go:217] disabling cri-docker service (if available) ...
I0401 20:37:37.701821 228685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0401 20:37:37.716634 228685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0401 20:37:37.731345 228685 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0401 20:37:37.834763 228685 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0401 20:37:37.926659 228685 docker.go:233] disabling docker service ...
I0401 20:37:37.926752 228685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0401 20:37:37.951283 228685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0401 20:37:37.963994 228685 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0401 20:37:38.078134 228685 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0401 20:37:38.174329 228685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0401 20:37:38.185920 228685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0401 20:37:38.203206 228685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0401 20:37:38.215343 228685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0401 20:37:38.228609 228685 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0401 20:37:38.228695 228685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0401 20:37:38.241429 228685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0401 20:37:38.252966 228685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0401 20:37:38.263384 228685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0401 20:37:38.274173 228685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0401 20:37:38.283562 228685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0401 20:37:38.294675 228685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0401 20:37:38.306558 228685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0401 20:37:38.317311 228685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0401 20:37:38.327025 228685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0401 20:37:38.336475 228685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0401 20:37:38.429356 228685 ssh_runner.go:195] Run: sudo systemctl restart containerd
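The sed rewrites above switch containerd to the detected "cgroupfs" driver before the restart. A sketch of the same rewrite-plus-restart sequence, with cgroupDriverCmds as a hypothetical helper (not minikube's source):

// Sketch: the sed rewrite used above to force the containerd cgroup driver,
// plus the daemon-reload/restart that follows it in the log.
package main

import "fmt"

func cgroupDriverCmds(useSystemd bool) []string {
	v := "false"
	if useSystemd {
		v = "true"
	}
	return []string{
		fmt.Sprintf(`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = %s|g' /etc/containerd/config.toml`, v),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart containerd",
	}
}

func main() {
	// "cgroupfs" was detected on the host above, so SystemdCgroup stays false.
	for _, c := range cgroupDriverCmds(false) {
		fmt.Println(c)
	}
}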
I0401 20:37:38.598765 228685 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0401 20:37:38.598836 228685 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0401 20:37:38.603741 228685 start.go:563] Will wait 60s for crictl version
I0401 20:37:38.603814 228685 ssh_runner.go:195] Run: which crictl
I0401 20:37:38.608044 228685 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0401 20:37:38.648342 228685 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.25
RuntimeApiVersion: v1
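The version probe above shells out to crictl and reads key/value lines. A standalone sketch that reproduces it, assuming crictl is installed at /usr/bin/crictl on the node:

// Sketch: reproduce the runtime version probe logged above by shelling out
// to `crictl version` and parsing its "Key: value" output lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	// Output is one "Key: value" per line, e.g. "RuntimeVersion: 1.7.25".
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if k, v, ok := strings.Cut(line, ":"); ok {
			fmt.Printf("%s = %s\n", k, strings.TrimSpace(v))
		}
	}
}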
I0401 20:37:38.648457 228685 ssh_runner.go:195] Run: containerd --version
I0401 20:37:38.675138 228685 ssh_runner.go:195] Run: containerd --version
I0401 20:37:38.705692 228685 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
I0401 20:37:35.584442 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:38.084071 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:38.708589 228685 cli_runner.go:164] Run: docker network inspect embed-certs-797670 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0401 20:37:38.724352 228685 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0401 20:37:38.728339 228685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0401 20:37:38.739884 228685 kubeadm.go:883] updating cluster {Name:embed-certs-797670 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-797670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0401 20:37:38.740010 228685 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0401 20:37:38.740075 228685 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 20:37:38.789389 228685 containerd.go:627] all images are preloaded for containerd runtime.
I0401 20:37:38.789412 228685 containerd.go:534] Images already preloaded, skipping extraction
I0401 20:37:38.789473 228685 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 20:37:38.829284 228685 containerd.go:627] all images are preloaded for containerd runtime.
I0401 20:37:38.829310 228685 cache_images.go:84] Images are preloaded, skipping loading
I0401 20:37:38.829319 228685 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
I0401 20:37:38.829411 228685 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-797670 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:embed-certs-797670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0401 20:37:38.829479 228685 ssh_runner.go:195] Run: sudo crictl info
I0401 20:37:38.866967 228685 cni.go:84] Creating CNI manager for ""
I0401 20:37:38.866990 228685 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0401 20:37:38.867000 228685 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0401 20:37:38.867041 228685 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-797670 NodeName:embed-certs-797670 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0401 20:37:38.867166 228685 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-797670"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
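The kubeadm config above is rendered from the options dump and written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sketch of rendering such a config with text/template (the template shape here is illustrative; only the values come from the log):

// Sketch: rendering a minimal kubeadm InitConfiguration like the one above.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.Name}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the log: node IP, API server port, node name.
	_ = t.Execute(os.Stdout, struct {
		NodeIP string
		Port   int
		Name   string
	}{"192.168.85.2", 8443, "embed-certs-797670"})
}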
I0401 20:37:38.867239 228685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0401 20:37:38.877189 228685 binaries.go:44] Found k8s binaries, skipping transfer
I0401 20:37:38.877315 228685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0401 20:37:38.886701 228685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0401 20:37:38.906105 228685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0401 20:37:38.925583 228685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0401 20:37:38.944925 228685 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0401 20:37:38.948636 228685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0401 20:37:38.959826 228685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0401 20:37:39.046311 228685 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0401 20:37:39.063222 228685 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670 for IP: 192.168.85.2
I0401 20:37:39.063248 228685 certs.go:194] generating shared ca certs ...
I0401 20:37:39.063265 228685 certs.go:226] acquiring lock for ca certs: {Name:mk9fe0d3c9420af86b4bae52abd5f6d6b2c4675e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:37:39.063451 228685 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-2281/.minikube/ca.key
I0401 20:37:39.063516 228685 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-2281/.minikube/proxy-client-ca.key
I0401 20:37:39.063533 228685 certs.go:256] generating profile certs ...
I0401 20:37:39.063609 228685 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/client.key
I0401 20:37:39.063647 228685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/client.crt with IP's: []
I0401 20:37:39.910253 228685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/client.crt ...
I0401 20:37:39.910283 228685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/client.crt: {Name:mkac0fa5967c251b2bcefc476c8a7910930fc926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:37:39.911097 228685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/client.key ...
I0401 20:37:39.911115 228685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/client.key: {Name:mkf4ed189ee2f465a85e7ce911a03b7ef92feb71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:37:39.911800 228685 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.key.6b8836d6
I0401 20:37:39.911827 228685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.crt.6b8836d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0401 20:37:40.888716 228685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.crt.6b8836d6 ...
I0401 20:37:40.888746 228685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.crt.6b8836d6: {Name:mka0ddb4719dad07292fdff1fad28231f993d618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:37:40.888926 228685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.key.6b8836d6 ...
I0401 20:37:40.888951 228685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.key.6b8836d6: {Name:mkd8470e7f70694c3fcdcf923803d98cf8d645f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:37:40.889604 228685 certs.go:381] copying /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.crt.6b8836d6 -> /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.crt
I0401 20:37:40.889738 228685 certs.go:385] copying /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.key.6b8836d6 -> /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.key
I0401 20:37:40.889810 228685 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/proxy-client.key
I0401 20:37:40.889831 228685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/proxy-client.crt with IP's: []
I0401 20:37:41.146801 228685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/proxy-client.crt ...
I0401 20:37:41.146831 228685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/proxy-client.crt: {Name:mk22573599f609ba19ce4ea8551d94540def11b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:37:41.147588 228685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/proxy-client.key ...
I0401 20:37:41.147608 228685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/proxy-client.key: {Name:mk24e8971958efa7479a6231b8ec8e05ce0c7e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:37:41.148395 228685 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/7597.pem (1338 bytes)
W0401 20:37:41.148457 228685 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-2281/.minikube/certs/7597_empty.pem, impossibly tiny 0 bytes
I0401 20:37:41.148473 228685 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca-key.pem (1675 bytes)
I0401 20:37:41.148519 228685 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/ca.pem (1078 bytes)
I0401 20:37:41.148563 228685 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/cert.pem (1123 bytes)
I0401 20:37:41.148595 228685 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/certs/key.pem (1675 bytes)
I0401 20:37:41.148658 228685 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-2281/.minikube/files/etc/ssl/certs/75972.pem (1708 bytes)
I0401 20:37:41.149281 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0401 20:37:41.176903 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0401 20:37:41.203651 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0401 20:37:41.233228 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0401 20:37:41.258684 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0401 20:37:41.283913 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0401 20:37:41.311029 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0401 20:37:41.338131 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/profiles/embed-certs-797670/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0401 20:37:41.365051 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/certs/7597.pem --> /usr/share/ca-certificates/7597.pem (1338 bytes)
I0401 20:37:41.392460 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/files/etc/ssl/certs/75972.pem --> /usr/share/ca-certificates/75972.pem (1708 bytes)
I0401 20:37:41.418627 228685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-2281/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0401 20:37:41.443753 228685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0401 20:37:41.468154 228685 ssh_runner.go:195] Run: openssl version
I0401 20:37:41.474214 228685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75972.pem && ln -fs /usr/share/ca-certificates/75972.pem /etc/ssl/certs/75972.pem"
I0401 20:37:41.484828 228685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75972.pem
I0401 20:37:41.491275 228685 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 1 19:52 /usr/share/ca-certificates/75972.pem
I0401 20:37:41.491385 228685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75972.pem
I0401 20:37:41.502033 228685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75972.pem /etc/ssl/certs/3ec20f2e.0"
I0401 20:37:41.513714 228685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0401 20:37:41.524160 228685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0401 20:37:41.529699 228685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 1 19:45 /usr/share/ca-certificates/minikubeCA.pem
I0401 20:37:41.529798 228685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0401 20:37:41.539197 228685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0401 20:37:41.549921 228685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7597.pem && ln -fs /usr/share/ca-certificates/7597.pem /etc/ssl/certs/7597.pem"
I0401 20:37:41.560897 228685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7597.pem
I0401 20:37:41.564883 228685 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 1 19:52 /usr/share/ca-certificates/7597.pem
I0401 20:37:41.565085 228685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7597.pem
I0401 20:37:41.572796 228685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7597.pem /etc/ssl/certs/51391683.0"
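The openssl/ln steps above install each PEM under /usr/share/ca-certificates and symlink it as /etc/ssl/certs/<subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem in the log). A sketch that computes the link name the same way, assuming openssl on PATH:

// Sketch: derive the /etc/ssl/certs/<hash>.0 symlink target for a PEM,
// using `openssl x509 -hash -noout -in <pem>` exactly as the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hashLink(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err)
}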
I0401 20:37:41.585026 228685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0401 20:37:41.588730 228685 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0401 20:37:41.588845 228685 kubeadm.go:392] StartCluster: {Name:embed-certs-797670 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-797670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0401 20:37:41.588973 228685 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0401 20:37:41.589035 228685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0401 20:37:41.629712 228685 cri.go:89] found id: ""
I0401 20:37:41.629849 228685 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0401 20:37:41.639186 228685 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0401 20:37:41.648716 228685 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0401 20:37:41.648826 228685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0401 20:37:41.658092 228685 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0401 20:37:41.658117 228685 kubeadm.go:157] found existing configuration files:
I0401 20:37:41.658169 228685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0401 20:37:41.668089 228685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0401 20:37:41.668163 228685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0401 20:37:41.677501 228685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0401 20:37:41.687258 228685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0401 20:37:41.687369 228685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0401 20:37:41.696478 228685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0401 20:37:41.706286 228685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0401 20:37:41.706401 228685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0401 20:37:41.716561 228685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0401 20:37:41.725892 228685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0401 20:37:41.726004 228685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
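The four grep-then-rm rounds above sweep stale kubeconfigs that don't point at the expected control-plane endpoint. A sketch that emits the equivalent shell, with the paths and endpoint taken from the log:

// Sketch: the stale-config sweep seen above. Each kubeconfig is kept only
// if it already references the expected endpoint; otherwise it is removed.
package main

import "fmt"

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// Equivalent shell to the grep + rm pair run over SSH in the log:
		fmt.Printf("sudo grep %s %s || sudo rm -f %s\n", endpoint, f, f)
	}
}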
I0401 20:37:41.735419 228685 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0401 20:37:41.794878 228685 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0401 20:37:41.795164 228685 kubeadm.go:310] [preflight] Running pre-flight checks
I0401 20:37:41.822013 228685 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0401 20:37:41.822172 228685 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1081-aws
I0401 20:37:41.822235 228685 kubeadm.go:310] OS: Linux
I0401 20:37:41.822303 228685 kubeadm.go:310] CGROUPS_CPU: enabled
I0401 20:37:41.822377 228685 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0401 20:37:41.822439 228685 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0401 20:37:41.822512 228685 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0401 20:37:41.822574 228685 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0401 20:37:41.822643 228685 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0401 20:37:41.822705 228685 kubeadm.go:310] CGROUPS_PIDS: enabled
I0401 20:37:41.822786 228685 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0401 20:37:41.822848 228685 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0401 20:37:41.892115 228685 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0401 20:37:41.892268 228685 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0401 20:37:41.892419 228685 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0401 20:37:41.899997 228685 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0401 20:37:40.084260 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:42.084392 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:41.906013 228685 out.go:235] - Generating certificates and keys ...
I0401 20:37:41.906240 228685 kubeadm.go:310] [certs] Using existing ca certificate authority
I0401 20:37:41.906351 228685 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0401 20:37:42.309137 228685 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0401 20:37:42.800716 228685 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0401 20:37:43.068280 228685 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0401 20:37:43.229952 228685 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0401 20:37:43.523468 228685 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0401 20:37:43.523815 228685 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-797670 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0401 20:37:43.705911 228685 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0401 20:37:43.706054 228685 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-797670 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0401 20:37:44.335442 228685 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0401 20:37:44.699130 228685 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0401 20:37:44.843300 228685 kubeadm.go:310] [certs] Generating "sa" key and public key
I0401 20:37:44.843527 228685 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0401 20:37:45.105924 228685 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0401 20:37:45.450209 228685 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0401 20:37:46.068124 228685 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0401 20:37:47.410303 228685 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0401 20:37:48.022947 228685 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0401 20:37:48.023647 228685 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0401 20:37:48.028847 228685 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0401 20:37:44.582913 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:46.584576 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:48.585200 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:48.032363 228685 out.go:235] - Booting up control plane ...
I0401 20:37:48.032476 228685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0401 20:37:48.032558 228685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0401 20:37:48.033472 228685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0401 20:37:48.044709 228685 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0401 20:37:48.052191 228685 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0401 20:37:48.052617 228685 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0401 20:37:48.176261 228685 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0401 20:37:48.176378 228685 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
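The kubelet-check phase above polls the kubelet's local healthz endpoint until it answers. A sketch of that wait loop, assuming the default port 10248 shown in the log:

// Sketch: poll http://127.0.0.1:10248/healthz until it returns 200 or the
// 4m0s cap stated by kubeadm expires.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("kubelet healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kubelet")
}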
I0401 20:37:50.585792 219087 pod_ready.go:103] pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace has status "Ready":"False"
I0401 20:37:52.583380 219087 pod_ready.go:82] duration metric: took 4m0.00606317s for pod "metrics-server-9975d5f86-xxnsk" in "kube-system" namespace to be "Ready" ...
E0401 20:37:52.583410 219087 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0401 20:37:52.583420 219087 pod_ready.go:39] duration metric: took 5m18.259207958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0401 20:37:52.583436 219087 api_server.go:52] waiting for apiserver process to appear ...
I0401 20:37:52.583481 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0401 20:37:52.583549 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0401 20:37:52.665330 219087 cri.go:89] found id: "aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4"
I0401 20:37:52.665355 219087 cri.go:89] found id: "5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee"
I0401 20:37:52.665360 219087 cri.go:89] found id: ""
I0401 20:37:52.665368 219087 logs.go:282] 2 containers: [aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4 5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee]
I0401 20:37:52.665434 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.678071 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.688183 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0401 20:37:52.688280 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0401 20:37:52.790260 219087 cri.go:89] found id: "ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db"
I0401 20:37:52.790286 219087 cri.go:89] found id: "6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94"
I0401 20:37:52.790308 219087 cri.go:89] found id: ""
I0401 20:37:52.790316 219087 logs.go:282] 2 containers: [ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db 6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94]
I0401 20:37:52.790389 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.797624 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.801888 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0401 20:37:52.801979 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0401 20:37:52.884920 219087 cri.go:89] found id: "39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50"
I0401 20:37:52.884963 219087 cri.go:89] found id: "bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73"
I0401 20:37:52.884968 219087 cri.go:89] found id: ""
I0401 20:37:52.884975 219087 logs.go:282] 2 containers: [39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50 bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73]
I0401 20:37:52.885039 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.892323 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.901426 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0401 20:37:52.901512 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0401 20:37:52.973562 219087 cri.go:89] found id: "c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c"
I0401 20:37:52.973587 219087 cri.go:89] found id: "1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba"
I0401 20:37:52.973592 219087 cri.go:89] found id: ""
I0401 20:37:52.973599 219087 logs.go:282] 2 containers: [c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c 1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba]
I0401 20:37:52.973667 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.977678 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:52.985685 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0401 20:37:52.985764 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0401 20:37:53.067543 219087 cri.go:89] found id: "48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0"
I0401 20:37:53.067565 219087 cri.go:89] found id: "35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428"
I0401 20:37:53.067570 219087 cri.go:89] found id: ""
I0401 20:37:53.067577 219087 logs.go:282] 2 containers: [48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0 35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428]
I0401 20:37:53.067633 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.072163 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.078298 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0401 20:37:53.078378 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0401 20:37:53.138643 219087 cri.go:89] found id: "28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593"
I0401 20:37:53.138669 219087 cri.go:89] found id: "4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff"
I0401 20:37:53.138676 219087 cri.go:89] found id: ""
I0401 20:37:53.138683 219087 logs.go:282] 2 containers: [28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593 4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff]
I0401 20:37:53.138742 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.144221 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.153619 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0401 20:37:53.153699 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0401 20:37:53.215208 219087 cri.go:89] found id: "094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4"
I0401 20:37:53.215235 219087 cri.go:89] found id: "e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a"
I0401 20:37:53.215247 219087 cri.go:89] found id: ""
I0401 20:37:53.215256 219087 logs.go:282] 2 containers: [094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4 e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a]
I0401 20:37:53.215340 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.221590 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.225539 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0401 20:37:53.225621 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0401 20:37:53.294839 219087 cri.go:89] found id: "ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8"
I0401 20:37:53.294871 219087 cri.go:89] found id: "abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829"
I0401 20:37:53.294876 219087 cri.go:89] found id: ""
I0401 20:37:53.294884 219087 logs.go:282] 2 containers: [ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8 abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829]
I0401 20:37:53.294950 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.301523 219087 ssh_runner.go:195] Run: which crictl
I0401 20:37:53.312458 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0401 20:37:53.312543 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0401 20:37:53.410976 219087 cri.go:89] found id: "7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4"
I0401 20:37:53.411039 219087 cri.go:89] found id: ""
I0401 20:37:53.411061 219087 logs.go:282] 1 containers: [7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4]
I0401 20:37:53.411155 219087 ssh_runner.go:195] Run: which crictl
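Each component lookup above is `crictl ps -a --quiet --name=<component>`; the IDs it returns are the containers whose logs are gathered next. A sketch of that lookup, assuming crictl is on PATH:

// Sketch: list container IDs for a component by name filter, mirroring the
// repeated `sudo crictl ps -a --quiet --name=...` runs in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one 64-hex container ID per line.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		fmt.Println(c, ids, err)
	}
}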
I0401 20:37:53.414823 219087 logs.go:123] Gathering logs for kube-apiserver [aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4] ...
I0401 20:37:53.414889 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4"
I0401 20:37:53.549012 219087 logs.go:123] Gathering logs for etcd [ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db] ...
I0401 20:37:53.549088 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db"
I0401 20:37:53.626458 219087 logs.go:123] Gathering logs for kindnet [e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a] ...
I0401 20:37:53.626489 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a"
I0401 20:37:53.709958 219087 logs.go:123] Gathering logs for container status ...
I0401 20:37:53.709992 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0401 20:37:53.840358 219087 logs.go:123] Gathering logs for coredns [bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73] ...
I0401 20:37:53.840438 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73"
I0401 20:37:50.677600 228685 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.501599564s
I0401 20:37:50.677706 228685 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0401 20:37:58.179274 228685 kubeadm.go:310] [api-check] The API server is healthy after 7.501610671s
I0401 20:37:58.210767 228685 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0401 20:37:58.242832 228685 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0401 20:37:58.271920 228685 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0401 20:37:58.272128 228685 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-797670 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0401 20:37:58.284767 228685 kubeadm.go:310] [bootstrap-token] Using token: 29ht7f.gguds12ucx7mus4g
I0401 20:37:53.913031 219087 logs.go:123] Gathering logs for kube-proxy [48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0] ...
I0401 20:37:53.913059 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0"
I0401 20:37:53.975055 219087 logs.go:123] Gathering logs for storage-provisioner [ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8] ...
I0401 20:37:53.975125 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8"
I0401 20:37:54.058157 219087 logs.go:123] Gathering logs for storage-provisioner [abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829] ...
I0401 20:37:54.058231 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829"
I0401 20:37:54.140175 219087 logs.go:123] Gathering logs for kubernetes-dashboard [7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4] ...
I0401 20:37:54.140245 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4"
I0401 20:37:54.206010 219087 logs.go:123] Gathering logs for containerd ...
I0401 20:37:54.206082 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0401 20:37:54.269170 219087 logs.go:123] Gathering logs for dmesg ...
I0401 20:37:54.269248 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0401 20:37:54.287600 219087 logs.go:123] Gathering logs for coredns [39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50] ...
I0401 20:37:54.287626 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50"
I0401 20:37:54.340376 219087 logs.go:123] Gathering logs for kube-scheduler [c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c] ...
I0401 20:37:54.340408 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c"
I0401 20:37:54.420992 219087 logs.go:123] Gathering logs for kube-scheduler [1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba] ...
I0401 20:37:54.421065 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba"
I0401 20:37:54.493191 219087 logs.go:123] Gathering logs for kube-proxy [35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428] ...
I0401 20:37:54.493263 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428"
I0401 20:37:54.544925 219087 logs.go:123] Gathering logs for kube-controller-manager [28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593] ...
I0401 20:37:54.545011 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593"
I0401 20:37:54.621163 219087 logs.go:123] Gathering logs for kubelet ...
I0401 20:37:54.621244 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0401 20:37:54.687642 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.165613 661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-kt7v7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-kt7v7" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:37:54.687935 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.165926 661 reflector.go:138] object-"kube-system"/"kindnet-token-4l6xv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-4l6xv" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:37:54.688174 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.166144 661 reflector.go:138] object-"default"/"default-token-8nfw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8nfw5" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:37:54.688407 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.166422 661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:37:54.698160 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:35 old-k8s-version-018253 kubelet[661]: E0401 20:32:35.405325 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:37:54.698758 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:35 old-k8s-version-018253 kubelet[661]: E0401 20:32:35.732359 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.701664 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:49 old-k8s-version-018253 kubelet[661]: E0401 20:32:49.452865 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:37:54.704538 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:01 old-k8s-version-018253 kubelet[661]: E0401 20:33:01.460208 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.705624 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:07 old-k8s-version-018253 kubelet[661]: E0401 20:33:07.937980 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.705993 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:08 old-k8s-version-018253 kubelet[661]: E0401 20:33:08.943923 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.706362 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:09 old-k8s-version-018253 kubelet[661]: E0401 20:33:09.951616 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.708860 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:13 old-k8s-version-018253 kubelet[661]: E0401 20:33:13.447207 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:37:54.709847 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:24 old-k8s-version-018253 kubelet[661]: E0401 20:33:24.988675 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.710326 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:26 old-k8s-version-018253 kubelet[661]: E0401 20:33:26.441570 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.710687 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:28 old-k8s-version-018253 kubelet[661]: E0401 20:33:28.629070 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.711044 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:39 old-k8s-version-018253 kubelet[661]: E0401 20:33:39.437052 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.711257 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:41 old-k8s-version-018253 kubelet[661]: E0401 20:33:41.441575 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.711878 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:52 old-k8s-version-018253 kubelet[661]: E0401 20:33:52.087801 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.714396 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:56 old-k8s-version-018253 kubelet[661]: E0401 20:33:56.457822 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:37:54.714760 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:58 old-k8s-version-018253 kubelet[661]: E0401 20:33:58.629551 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.714972 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:09 old-k8s-version-018253 kubelet[661]: E0401 20:34:09.437914 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.715330 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:10 old-k8s-version-018253 kubelet[661]: E0401 20:34:10.440924 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.715548 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:21 old-k8s-version-018253 kubelet[661]: E0401 20:34:21.437543 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.715903 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:25 old-k8s-version-018253 kubelet[661]: E0401 20:34:25.436995 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.716120 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:36 old-k8s-version-018253 kubelet[661]: E0401 20:34:36.438242 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.716739 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:41 old-k8s-version-018253 kubelet[661]: E0401 20:34:41.247792 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.717117 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:48 old-k8s-version-018253 kubelet[661]: E0401 20:34:48.629699 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.717334 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:49 old-k8s-version-018253 kubelet[661]: E0401 20:34:49.437577 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.717693 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:00 old-k8s-version-018253 kubelet[661]: E0401 20:35:00.437297 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.717908 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:01 old-k8s-version-018253 kubelet[661]: E0401 20:35:01.437716 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.718286 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:11 old-k8s-version-018253 kubelet[661]: E0401 20:35:11.437081 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.718846 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:15 old-k8s-version-018253 kubelet[661]: E0401 20:35:15.437535 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.719214 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:25 old-k8s-version-018253 kubelet[661]: E0401 20:35:25.437035 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.721857 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:28 old-k8s-version-018253 kubelet[661]: E0401 20:35:28.447976 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:37:54.722203 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:37 old-k8s-version-018253 kubelet[661]: E0401 20:35:37.437132 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.722391 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:41 old-k8s-version-018253 kubelet[661]: E0401 20:35:41.437697 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.722804 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:49 old-k8s-version-018253 kubelet[661]: E0401 20:35:49.437570 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.722991 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:56 old-k8s-version-018253 kubelet[661]: E0401 20:35:56.438417 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.723581 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:04 old-k8s-version-018253 kubelet[661]: E0401 20:36:04.499741 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.723908 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:08 old-k8s-version-018253 kubelet[661]: E0401 20:36:08.629509 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.724090 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:10 old-k8s-version-018253 kubelet[661]: E0401 20:36:10.437611 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.724415 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:19 old-k8s-version-018253 kubelet[661]: E0401 20:36:19.437200 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.724601 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:22 old-k8s-version-018253 kubelet[661]: E0401 20:36:22.437564 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.724925 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:30 old-k8s-version-018253 kubelet[661]: E0401 20:36:30.437610 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.725168 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:35 old-k8s-version-018253 kubelet[661]: E0401 20:36:35.437428 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.725525 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:45 old-k8s-version-018253 kubelet[661]: E0401 20:36:45.437026 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.725738 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:48 old-k8s-version-018253 kubelet[661]: E0401 20:36:48.437377 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.726094 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:57 old-k8s-version-018253 kubelet[661]: E0401 20:36:57.437066 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.726316 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:02 old-k8s-version-018253 kubelet[661]: E0401 20:37:02.437440 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.726671 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:10 old-k8s-version-018253 kubelet[661]: E0401 20:37:10.437686 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.726884 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:15 old-k8s-version-018253 kubelet[661]: E0401 20:37:15.437747 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.727238 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:23 old-k8s-version-018253 kubelet[661]: E0401 20:37:23.437103 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.727525 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:27 old-k8s-version-018253 kubelet[661]: E0401 20:37:27.437416 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.727897 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:35 old-k8s-version-018253 kubelet[661]: E0401 20:37:35.438272 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.728110 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:39 old-k8s-version-018253 kubelet[661]: E0401 20:37:39.439890 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:54.728565 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:48 old-k8s-version-018253 kubelet[661]: E0401 20:37:48.437219 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:54.728784 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:50 old-k8s-version-018253 kubelet[661]: E0401 20:37:50.437563 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
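Two failures repeat through the whole kubelet log above: metrics-server can never pull its image, because the test points it at fake.domain, a registry that is never meant to resolve; and dashboard-metrics-scraper is crash-looping, with kubelet's restart back-off doubling from 10s through 20s, 40s and 1m20s to the 2m40s seen at the end. The image-pull side can be confirmed from inside the node; a sketch, assuming the DNS server shown in the errors:

  # fake.domain is expected to fail resolution against 192.168.76.1:53
  nslookup fake.domain 192.168.76.1                              # -> NXDOMAIN
  sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4    # -> "no such host", as logged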
I0401 20:37:54.728817 219087 logs.go:123] Gathering logs for describe nodes ...
I0401 20:37:54.728847 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0401 20:37:54.965470 219087 logs.go:123] Gathering logs for kube-apiserver [5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee] ...
I0401 20:37:54.965547 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee"
I0401 20:37:55.101847 219087 logs.go:123] Gathering logs for etcd [6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94] ...
I0401 20:37:55.101932 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94"
I0401 20:37:55.185508 219087 logs.go:123] Gathering logs for kube-controller-manager [4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff] ...
I0401 20:37:55.185636 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff"
I0401 20:37:55.272998 219087 logs.go:123] Gathering logs for kindnet [094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4] ...
I0401 20:37:55.273072 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4"
I0401 20:37:55.334670 219087 out.go:358] Setting ErrFile to fd 2...
I0401 20:37:55.334812 219087 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0401 20:37:55.334896 219087 out.go:270] X Problems detected in kubelet:
W0401 20:37:55.335641 219087 out.go:270] Apr 01 20:37:27 old-k8s-version-018253 kubelet[661]: E0401 20:37:27.437416 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:55.335782 219087 out.go:270] Apr 01 20:37:35 old-k8s-version-018253 kubelet[661]: E0401 20:37:35.438272 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:55.335827 219087 out.go:270] Apr 01 20:37:39 old-k8s-version-018253 kubelet[661]: E0401 20:37:39.439890 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:37:55.335910 219087 out.go:270] Apr 01 20:37:48 old-k8s-version-018253 kubelet[661]: E0401 20:37:48.437219 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:37:55.335943 219087 out.go:270] Apr 01 20:37:50 old-k8s-version-018253 kubelet[661]: E0401 20:37:50.437563 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0401 20:37:55.335989 219087 out.go:358] Setting ErrFile to fd 2...
I0401 20:37:55.336018 219087 out.go:392] TERM=,COLORTERM=, which probably does not support color
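The five lines above are minikube's own summary: when log gathering trips over kubelet problems it re-prints the most recent ones under "X Problems detected in kubelet" before continuing. The full set can be pulled the same way the test did, assuming the profile is still running:

  # same journalctl the test ran, via minikube's ssh wrapper
  minikube -p old-k8s-version-018253 ssh -- sudo journalctl -u kubelet -n 400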
I0401 20:37:58.287793 228685 out.go:235] - Configuring RBAC rules ...
I0401 20:37:58.287929 228685 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0401 20:37:58.294981 228685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0401 20:37:58.304443 228685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0401 20:37:58.309123 228685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0401 20:37:58.314323 228685 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0401 20:37:58.318524 228685 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0401 20:37:58.588858 228685 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0401 20:37:59.012862 228685 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0401 20:37:59.586597 228685 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0401 20:37:59.588234 228685 kubeadm.go:310]
I0401 20:37:59.588310 228685 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0401 20:37:59.588325 228685 kubeadm.go:310]
I0401 20:37:59.588403 228685 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0401 20:37:59.588413 228685 kubeadm.go:310]
I0401 20:37:59.588438 228685 kubeadm.go:310] mkdir -p $HOME/.kube
I0401 20:37:59.588512 228685 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0401 20:37:59.588575 228685 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0401 20:37:59.588583 228685 kubeadm.go:310]
I0401 20:37:59.588637 228685 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0401 20:37:59.588648 228685 kubeadm.go:310]
I0401 20:37:59.588695 228685 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0401 20:37:59.588704 228685 kubeadm.go:310]
I0401 20:37:59.588755 228685 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0401 20:37:59.588833 228685 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0401 20:37:59.588903 228685 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0401 20:37:59.588911 228685 kubeadm.go:310]
I0401 20:37:59.589028 228685 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0401 20:37:59.589111 228685 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0401 20:37:59.589120 228685 kubeadm.go:310]
I0401 20:37:59.589203 228685 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 29ht7f.gguds12ucx7mus4g \
I0401 20:37:59.589307 228685 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a6e14e1df054d8614b8b74bb0ae23cafc5e1e528630003b65cdac001b3106e63 \
I0401 20:37:59.589331 228685 kubeadm.go:310] --control-plane
I0401 20:37:59.589336 228685 kubeadm.go:310]
I0401 20:37:59.589419 228685 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0401 20:37:59.589423 228685 kubeadm.go:310]
I0401 20:37:59.589504 228685 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 29ht7f.gguds12ucx7mus4g \
I0401 20:37:59.589605 228685 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a6e14e1df054d8614b8b74bb0ae23cafc5e1e528630003b65cdac001b3106e63
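kubeadm prints the join commands with a fresh bootstrap token and the SHA-256 of the cluster CA's public key. If the printed command is lost, the hash half can be recomputed on the control plane with the standard recipe from the kubeadm documentation:

  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'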
I0401 20:37:59.594649 228685 kubeadm.go:310] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I0401 20:37:59.594957 228685 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1081-aws\n", err: exit status 1
I0401 20:37:59.595079 228685 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0401 20:37:59.595090 228685 cni.go:84] Creating CNI manager for ""
I0401 20:37:59.595097 228685 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0401 20:37:59.598263 228685 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0401 20:37:59.601133 228685 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0401 20:37:59.605185 228685 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
I0401 20:37:59.605207 228685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I0401 20:37:59.626375 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
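With the docker driver and containerd runtime, minikube recommends kindnet (the cni.go lines above): it writes its bundled manifest to /var/tmp/minikube/cni.yaml on the node and applies it with the version-matched kubectl it installed under /var/lib/minikube/binaries. To see what landed, something like the following, assuming kindnet's usual app=kindnet pod label:

  minikube -p embed-certs-797670 ssh -- cat /var/tmp/minikube/cni.yaml
  minikube -p embed-certs-797670 ssh -- sudo /var/lib/minikube/binaries/v1.32.2/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system -l app=kindnet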
I0401 20:37:59.956673 228685 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0401 20:37:59.956821 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 20:37:59.956900 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-797670 minikube.k8s.io/updated_at=2025_04_01T20_37_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=embed-certs-797670 minikube.k8s.io/primary=true
I0401 20:38:00.255729 228685 ops.go:34] apiserver oom_adj: -16
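The -16 read back here is the apiserver's OOM-score adjustment: minikube checks that kube-apiserver is deprioritized for the kernel OOM killer (more negative means less likely to be killed). The check is just the one-liner from the log:

  cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16 on this node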
I0401 20:38:00.256356 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 20:38:00.757072 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 20:38:01.257155 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 20:38:01.756919 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 20:38:02.256455 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 20:38:02.756435 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 20:38:03.257399 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 20:38:03.756482 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 20:38:04.256906 228685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 20:38:04.491006 228685 kubeadm.go:1113] duration metric: took 4.534237119s to wait for elevateKubeSystemPrivileges
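The half-second cadence of the `kubectl get sa default` runs above is a wait loop: minikube retries until the "default" ServiceAccount exists, which only happens once the controller-manager's service-account machinery is up, so the cluster-admin binding created for kube-system:default at 20:37:59 is known to be effective. A hand-rolled equivalent, assuming a working kubeconfig:

  until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done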
I0401 20:38:04.491040 228685 kubeadm.go:394] duration metric: took 22.902200469s to StartCluster
I0401 20:38:04.491057 228685 settings.go:142] acquiring lock: {Name:mke009045444eed25507a29a5243ce88f8891cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:38:04.491119 228685 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20506-2281/kubeconfig
I0401 20:38:04.492586 228685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-2281/kubeconfig: {Name:mkf36432e76eb80fc7384359f87ed1051bb3861b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0401 20:38:04.492815 228685 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0401 20:38:04.492962 228685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0401 20:38:04.493206 228685 config.go:182] Loaded profile config "embed-certs-797670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0401 20:38:04.493249 228685 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0401 20:38:04.493322 228685 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-797670"
I0401 20:38:04.493342 228685 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-797670"
I0401 20:38:04.493370 228685 host.go:66] Checking if "embed-certs-797670" exists ...
I0401 20:38:04.494004 228685 addons.go:69] Setting default-storageclass=true in profile "embed-certs-797670"
I0401 20:38:04.494028 228685 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-797670"
I0401 20:38:04.494082 228685 cli_runner.go:164] Run: docker container inspect embed-certs-797670 --format={{.State.Status}}
I0401 20:38:04.494349 228685 cli_runner.go:164] Run: docker container inspect embed-certs-797670 --format={{.State.Status}}
I0401 20:38:04.499296 228685 out.go:177] * Verifying Kubernetes components...
I0401 20:38:04.506014 228685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0401 20:38:04.538525 228685 addons.go:238] Setting addon default-storageclass=true in "embed-certs-797670"
I0401 20:38:04.538570 228685 host.go:66] Checking if "embed-certs-797670" exists ...
I0401 20:38:04.538985 228685 cli_runner.go:164] Run: docker container inspect embed-certs-797670 --format={{.State.Status}}
I0401 20:38:04.559797 228685 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0401 20:38:04.562845 228685 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0401 20:38:04.562868 228685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0401 20:38:04.562937 228685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-797670
I0401 20:38:04.581171 228685 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0401 20:38:04.581197 228685 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0401 20:38:04.581258 228685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-797670
I0401 20:38:04.604876 228685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/embed-certs-797670/id_rsa Username:docker}
I0401 20:38:04.622655 228685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/20506-2281/.minikube/machines/embed-certs-797670/id_rsa Username:docker}
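The two docker inspect calls resolve where the node container's sshd landed on the host: the Go template pulls the HostPort mapped to 22/tcp, and both addon file copies then go over SSH to 127.0.0.1:33073. The same lookup works standalone:

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-797670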
I0401 20:38:04.966230 228685 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0401 20:38:04.966681 228685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.85.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0401 20:38:05.022551 228685 node_ready.go:35] waiting up to 6m0s for node "embed-certs-797670" to be "Ready" ...
I0401 20:38:05.040700 228685 node_ready.go:49] node "embed-certs-797670" has status "Ready":"True"
I0401 20:38:05.040773 228685 node_ready.go:38] duration metric: took 18.137652ms for node "embed-certs-797670" to be "Ready" ...
I0401 20:38:05.040799 228685 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0401 20:38:05.044546 228685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0401 20:38:05.046310 228685 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xhbpl" in "kube-system" namespace to be "Ready" ...
I0401 20:38:05.067932 228685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0401 20:38:05.806036 228685 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
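The long bash pipeline at 20:38:04.966 is how that host record gets in: the coredns ConfigMap is fetched, sed splices a hosts block (mapping 192.168.85.1 to host.minikube.internal, with fallthrough) in front of the forward plugin plus a log directive after errors, and the result is fed back through kubectl replace. To verify, assuming a kubeconfig pointing at the new cluster:

  # the Corefile in the ConfigMap should now contain the injected hosts stanza
  kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts'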
I0401 20:38:06.310119 228685 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-797670" context rescaled to 1 replicas
I0401 20:38:06.509851 228685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.441831945s)
I0401 20:38:06.512804 228685 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0401 20:38:05.339774 219087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0401 20:38:05.356670 219087 api_server.go:72] duration metric: took 5m48.852325321s to wait for apiserver process to appear ...
I0401 20:38:05.356694 219087 api_server.go:88] waiting for apiserver healthz status ...
I0401 20:38:05.356731 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0401 20:38:05.356790 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0401 20:38:05.412265 219087 cri.go:89] found id: "aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4"
I0401 20:38:05.412286 219087 cri.go:89] found id: "5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee"
I0401 20:38:05.412291 219087 cri.go:89] found id: ""
I0401 20:38:05.412298 219087 logs.go:282] 2 containers: [aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4 5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee]
I0401 20:38:05.412361 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.420695 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.424671 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0401 20:38:05.424741 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0401 20:38:05.495927 219087 cri.go:89] found id: "ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db"
I0401 20:38:05.495990 219087 cri.go:89] found id: "6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94"
I0401 20:38:05.496010 219087 cri.go:89] found id: ""
I0401 20:38:05.496035 219087 logs.go:282] 2 containers: [ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db 6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94]
I0401 20:38:05.496128 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.502060 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.508650 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0401 20:38:05.508722 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0401 20:38:05.575950 219087 cri.go:89] found id: "39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50"
I0401 20:38:05.575971 219087 cri.go:89] found id: "bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73"
I0401 20:38:05.575976 219087 cri.go:89] found id: ""
I0401 20:38:05.575984 219087 logs.go:282] 2 containers: [39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50 bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73]
I0401 20:38:05.576044 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.580291 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.584219 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0401 20:38:05.584371 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0401 20:38:05.656471 219087 cri.go:89] found id: "c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c"
I0401 20:38:05.656545 219087 cri.go:89] found id: "1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba"
I0401 20:38:05.656565 219087 cri.go:89] found id: ""
I0401 20:38:05.656589 219087 logs.go:282] 2 containers: [c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c 1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba]
I0401 20:38:05.656685 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.661249 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.667341 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0401 20:38:05.667458 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0401 20:38:05.809481 219087 cri.go:89] found id: "48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0"
I0401 20:38:05.809570 219087 cri.go:89] found id: "35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428"
I0401 20:38:05.809590 219087 cri.go:89] found id: ""
I0401 20:38:05.809614 219087 logs.go:282] 2 containers: [48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0 35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428]
I0401 20:38:05.809719 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.813818 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.818602 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0401 20:38:05.818743 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0401 20:38:05.896895 219087 cri.go:89] found id: "28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593"
I0401 20:38:05.896984 219087 cri.go:89] found id: "4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff"
I0401 20:38:05.897004 219087 cri.go:89] found id: ""
I0401 20:38:05.897028 219087 logs.go:282] 2 containers: [28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593 4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff]
I0401 20:38:05.897179 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.901154 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.905063 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0401 20:38:05.905252 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0401 20:38:05.975652 219087 cri.go:89] found id: "094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4"
I0401 20:38:05.975729 219087 cri.go:89] found id: "e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a"
I0401 20:38:05.975748 219087 cri.go:89] found id: ""
I0401 20:38:05.975772 219087 logs.go:282] 2 containers: [094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4 e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a]
I0401 20:38:05.975859 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.980325 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:05.984586 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0401 20:38:05.984728 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0401 20:38:06.065357 219087 cri.go:89] found id: "7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4"
I0401 20:38:06.065424 219087 cri.go:89] found id: ""
I0401 20:38:06.065455 219087 logs.go:282] 1 containers: [7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4]
I0401 20:38:06.065547 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:06.071786 219087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0401 20:38:06.071956 219087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0401 20:38:06.135401 219087 cri.go:89] found id: "ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8"
I0401 20:38:06.135538 219087 cri.go:89] found id: "abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829"
I0401 20:38:06.135559 219087 cri.go:89] found id: ""
I0401 20:38:06.135592 219087 logs.go:282] 2 containers: [ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8 abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829]
I0401 20:38:06.135730 219087 ssh_runner.go:195] Run: which crictl
I0401 20:38:06.142152 219087 ssh_runner.go:195] Run: which crictl
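(The enumeration above queries each control-plane component by name against containerd's CRI socket and records the returned container IDs. A minimal sketch of reproducing one of these queries by hand, assuming the profile name from this run and that crictl is on the node's PATH:

    # list all kube-apiserver containers (running and exited) by ID,
    # the same query minikube issues in the lines above
    minikube -p old-k8s-version-018253 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
)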
I0401 20:38:06.149630 219087 logs.go:123] Gathering logs for dmesg ...
I0401 20:38:06.149703 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0401 20:38:06.179676 219087 logs.go:123] Gathering logs for etcd [ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db] ...
I0401 20:38:06.179755 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db"
I0401 20:38:06.266802 219087 logs.go:123] Gathering logs for kube-controller-manager [28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593] ...
I0401 20:38:06.266873 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593"
I0401 20:38:06.360162 219087 logs.go:123] Gathering logs for kube-controller-manager [4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff] ...
I0401 20:38:06.360242 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff"
I0401 20:38:06.450106 219087 logs.go:123] Gathering logs for container status ...
I0401 20:38:06.450139 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0401 20:38:06.536857 219087 logs.go:123] Gathering logs for kube-scheduler [1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba] ...
I0401 20:38:06.536886 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba"
I0401 20:38:06.590418 219087 logs.go:123] Gathering logs for kube-proxy [35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428] ...
I0401 20:38:06.590449 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428"
I0401 20:38:06.643042 219087 logs.go:123] Gathering logs for describe nodes ...
I0401 20:38:06.643081 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0401 20:38:06.917656 219087 logs.go:123] Gathering logs for etcd [6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94] ...
I0401 20:38:06.917687 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94"
I0401 20:38:06.988674 219087 logs.go:123] Gathering logs for coredns [bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73] ...
I0401 20:38:06.988708 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73"
I0401 20:38:07.086393 219087 logs.go:123] Gathering logs for kindnet [094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4] ...
I0401 20:38:07.086426 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4"
I0401 20:38:07.169164 219087 logs.go:123] Gathering logs for storage-provisioner [ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8] ...
I0401 20:38:07.169269 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8"
I0401 20:38:07.251767 219087 logs.go:123] Gathering logs for containerd ...
I0401 20:38:07.251795 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0401 20:38:07.324610 219087 logs.go:123] Gathering logs for kube-proxy [48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0] ...
I0401 20:38:07.324647 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0"
I0401 20:38:07.390912 219087 logs.go:123] Gathering logs for kindnet [e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a] ...
I0401 20:38:07.390941 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a"
I0401 20:38:07.439773 219087 logs.go:123] Gathering logs for kubernetes-dashboard [7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4] ...
I0401 20:38:07.439807 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4"
I0401 20:38:07.506054 219087 logs.go:123] Gathering logs for storage-provisioner [abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829] ...
I0401 20:38:07.506099 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829"
I0401 20:38:07.563582 219087 logs.go:123] Gathering logs for kubelet ...
I0401 20:38:07.563617 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0401 20:38:07.627171 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.165613 661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-kt7v7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-kt7v7" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:38:07.627435 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.165926 661 reflector.go:138] object-"kube-system"/"kindnet-token-4l6xv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-4l6xv" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:38:07.627671 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.166144 661 reflector.go:138] object-"default"/"default-token-8nfw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8nfw5" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:38:07.627897 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:34 old-k8s-version-018253 kubelet[661]: E0401 20:32:34.166422 661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-018253" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-018253' and this object
W0401 20:38:07.636773 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:35 old-k8s-version-018253 kubelet[661]: E0401 20:32:35.405325 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:38:07.637337 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:35 old-k8s-version-018253 kubelet[661]: E0401 20:32:35.732359 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.640140 219087 logs.go:138] Found kubelet problem: Apr 01 20:32:49 old-k8s-version-018253 kubelet[661]: E0401 20:32:49.452865 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:38:07.641970 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:01 old-k8s-version-018253 kubelet[661]: E0401 20:33:01.460208 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.642932 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:07 old-k8s-version-018253 kubelet[661]: E0401 20:33:07.937980 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.643287 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:08 old-k8s-version-018253 kubelet[661]: E0401 20:33:08.943923 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.643643 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:09 old-k8s-version-018253 kubelet[661]: E0401 20:33:09.951616 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.646367 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:13 old-k8s-version-018253 kubelet[661]: E0401 20:33:13.447207 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:38:07.647329 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:24 old-k8s-version-018253 kubelet[661]: E0401 20:33:24.988675 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.647771 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:26 old-k8s-version-018253 kubelet[661]: E0401 20:33:26.441570 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.648125 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:28 old-k8s-version-018253 kubelet[661]: E0401 20:33:28.629070 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.648479 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:39 old-k8s-version-018253 kubelet[661]: E0401 20:33:39.437052 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.648749 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:41 old-k8s-version-018253 kubelet[661]: E0401 20:33:41.441575 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.649481 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:52 old-k8s-version-018253 kubelet[661]: E0401 20:33:52.087801 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.652404 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:56 old-k8s-version-018253 kubelet[661]: E0401 20:33:56.457822 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:38:07.652789 219087 logs.go:138] Found kubelet problem: Apr 01 20:33:58 old-k8s-version-018253 kubelet[661]: E0401 20:33:58.629551 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.653059 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:09 old-k8s-version-018253 kubelet[661]: E0401 20:34:09.437914 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.653438 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:10 old-k8s-version-018253 kubelet[661]: E0401 20:34:10.440924 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.653755 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:21 old-k8s-version-018253 kubelet[661]: E0401 20:34:21.437543 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.654175 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:25 old-k8s-version-018253 kubelet[661]: E0401 20:34:25.436995 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.654371 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:36 old-k8s-version-018253 kubelet[661]: E0401 20:34:36.438242 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.655026 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:41 old-k8s-version-018253 kubelet[661]: E0401 20:34:41.247792 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.655387 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:48 old-k8s-version-018253 kubelet[661]: E0401 20:34:48.629699 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.655577 219087 logs.go:138] Found kubelet problem: Apr 01 20:34:49 old-k8s-version-018253 kubelet[661]: E0401 20:34:49.437577 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.655906 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:00 old-k8s-version-018253 kubelet[661]: E0401 20:35:00.437297 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.656096 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:01 old-k8s-version-018253 kubelet[661]: E0401 20:35:01.437716 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.656462 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:11 old-k8s-version-018253 kubelet[661]: E0401 20:35:11.437081 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.656691 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:15 old-k8s-version-018253 kubelet[661]: E0401 20:35:15.437535 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.657055 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:25 old-k8s-version-018253 kubelet[661]: E0401 20:35:25.437035 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.659877 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:28 old-k8s-version-018253 kubelet[661]: E0401 20:35:28.447976 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0401 20:38:07.660251 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:37 old-k8s-version-018253 kubelet[661]: E0401 20:35:37.437132 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.660461 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:41 old-k8s-version-018253 kubelet[661]: E0401 20:35:41.437697 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.660815 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:49 old-k8s-version-018253 kubelet[661]: E0401 20:35:49.437570 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.661040 219087 logs.go:138] Found kubelet problem: Apr 01 20:35:56 old-k8s-version-018253 kubelet[661]: E0401 20:35:56.438417 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.661656 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:04 old-k8s-version-018253 kubelet[661]: E0401 20:36:04.499741 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.662017 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:08 old-k8s-version-018253 kubelet[661]: E0401 20:36:08.629509 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.662240 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:10 old-k8s-version-018253 kubelet[661]: E0401 20:36:10.437611 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.662605 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:19 old-k8s-version-018253 kubelet[661]: E0401 20:36:19.437200 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.662830 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:22 old-k8s-version-018253 kubelet[661]: E0401 20:36:22.437564 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.663183 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:30 old-k8s-version-018253 kubelet[661]: E0401 20:36:30.437610 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.663396 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:35 old-k8s-version-018253 kubelet[661]: E0401 20:36:35.437428 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.663747 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:45 old-k8s-version-018253 kubelet[661]: E0401 20:36:45.437026 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.663953 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:48 old-k8s-version-018253 kubelet[661]: E0401 20:36:48.437377 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.664315 219087 logs.go:138] Found kubelet problem: Apr 01 20:36:57 old-k8s-version-018253 kubelet[661]: E0401 20:36:57.437066 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.664523 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:02 old-k8s-version-018253 kubelet[661]: E0401 20:37:02.437440 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.664877 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:10 old-k8s-version-018253 kubelet[661]: E0401 20:37:10.437686 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.665086 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:15 old-k8s-version-018253 kubelet[661]: E0401 20:37:15.437747 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.665489 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:23 old-k8s-version-018253 kubelet[661]: E0401 20:37:23.437103 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.665700 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:27 old-k8s-version-018253 kubelet[661]: E0401 20:37:27.437416 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.666056 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:35 old-k8s-version-018253 kubelet[661]: E0401 20:37:35.438272 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.666510 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:39 old-k8s-version-018253 kubelet[661]: E0401 20:37:39.439890 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.666882 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:48 old-k8s-version-018253 kubelet[661]: E0401 20:37:48.437219 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.667105 219087 logs.go:138] Found kubelet problem: Apr 01 20:37:50 old-k8s-version-018253 kubelet[661]: E0401 20:37:50.437563 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.667455 219087 logs.go:138] Found kubelet problem: Apr 01 20:38:01 old-k8s-version-018253 kubelet[661]: E0401 20:38:01.437151 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.667667 219087 logs.go:138] Found kubelet problem: Apr 01 20:38:05 old-k8s-version-018253 kubelet[661]: E0401 20:38:05.438853 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0401 20:38:07.667684 219087 logs.go:123] Gathering logs for kube-apiserver [aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4] ...
I0401 20:38:07.667713 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4"
I0401 20:38:07.777836 219087 logs.go:123] Gathering logs for kube-apiserver [5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee] ...
I0401 20:38:07.777871 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee"
I0401 20:38:07.867675 219087 logs.go:123] Gathering logs for coredns [39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50] ...
I0401 20:38:07.867709 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50"
I0401 20:38:07.919313 219087 logs.go:123] Gathering logs for kube-scheduler [c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c] ...
I0401 20:38:07.919346 219087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c"
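(Each "Gathering logs" step above shells out to crictl with a fixed 400-line tail against one of the IDs enumerated earlier. A sketch of pulling the same logs by hand; the ID placeholder is illustrative, substitute any ID from the enumeration:

    # fetch the last 400 lines from one container, mirroring the Run: lines above
    minikube -p old-k8s-version-018253 ssh -- sudo crictl logs --tail 400 <container-id>
)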
I0401 20:38:07.992501 219087 out.go:358] Setting ErrFile to fd 2...
I0401 20:38:07.992593 219087 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0401 20:38:07.992718 219087 out.go:270] X Problems detected in kubelet:
W0401 20:38:07.992769 219087 out.go:270] Apr 01 20:37:39 old-k8s-version-018253 kubelet[661]: E0401 20:37:39.439890 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.992805 219087 out.go:270] Apr 01 20:37:48 old-k8s-version-018253 kubelet[661]: E0401 20:37:48.437219 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.992864 219087 out.go:270] Apr 01 20:37:50 old-k8s-version-018253 kubelet[661]: E0401 20:37:50.437563 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0401 20:38:07.992925 219087 out.go:270] Apr 01 20:38:01 old-k8s-version-018253 kubelet[661]: E0401 20:38:01.437151 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
W0401 20:38:07.992973 219087 out.go:270] Apr 01 20:38:05 old-k8s-version-018253 kubelet[661]: E0401 20:38:05.438853 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0401 20:38:07.993039 219087 out.go:358] Setting ErrFile to fd 2...
I0401 20:38:07.993064 219087 out.go:392] TERM=,COLORTERM=, which probably does not support color
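(The kubelet problems collected above fall into two groups: metrics-server stuck in ErrImagePull/ImagePullBackOff because fake.domain, the intentionally unresolvable registry this suite points the addon at, fails DNS lookup; and dashboard-metrics-scraper in CrashLoopBackOff with a back-off growing from 10s to 2m40s. A sketch for confirming the DNS failure from inside the node, assuming nslookup is available in the node image:

    # expect NXDOMAIN / "no such host", matching the kubelet errors above
    minikube -p old-k8s-version-018253 ssh -- nslookup fake.domain
)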
I0401 20:38:06.515505 228685 addons.go:514] duration metric: took 2.022246155s for enable addons: enabled=[default-storageclass storage-provisioner]
I0401 20:38:06.550240 228685 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-xhbpl" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-xhbpl" not found
I0401 20:38:06.550272 228685 pod_ready.go:82] duration metric: took 1.503881978s for pod "coredns-668d6bf9bc-xhbpl" in "kube-system" namespace to be "Ready" ...
E0401 20:38:06.550285 228685 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-xhbpl" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-xhbpl" not found
I0401 20:38:06.550292 228685 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zhfdm" in "kube-system" namespace to be "Ready" ...
I0401 20:38:08.556355 228685 pod_ready.go:103] pod "coredns-668d6bf9bc-zhfdm" in "kube-system" namespace has status "Ready":"False"
I0401 20:38:11.055347 228685 pod_ready.go:103] pod "coredns-668d6bf9bc-zhfdm" in "kube-system" namespace has status "Ready":"False"
I0401 20:38:13.056019 228685 pod_ready.go:103] pod "coredns-668d6bf9bc-zhfdm" in "kube-system" namespace has status "Ready":"False"
I0401 20:38:15.056782 228685 pod_ready.go:103] pod "coredns-668d6bf9bc-zhfdm" in "kube-system" namespace has status "Ready":"False"
I0401 20:38:17.993573 219087 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0401 20:38:18.011180 219087 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0401 20:38:18.014651 219087 out.go:201]
W0401 20:38:18.017459 219087 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0401 20:38:18.017521 219087 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0401 20:38:18.017540 219087 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0401 20:38:18.017545 219087 out.go:270] *
W0401 20:38:18.018459 219087 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0401 20:38:18.020413 219087 out.go:201]
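(The run ends with exit status 102, K8S_UNHEALTHY_CONTROL_PLANE: the API server answers /healthz with 200, but the control plane never reported the requested v1.20.0 within the 6m0s wait. The recovery step is the one the output itself suggests:

    # suggestion taken verbatim from the output above; removes all profiles and cached state
    minikube delete --all --purge
)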
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
0a9a0e51230c7 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 b8ea4444a24c5 dashboard-metrics-scraper-8d5bb5db8-rsgcp
7297811dd6164 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 e31f84d555ee5 kubernetes-dashboard-cd95d586-lssnx
39bd8cf9d17b1 db91994f4ee8f 5 minutes ago Running coredns 1 dd16443b30b70 coredns-74ff55c5b-b77sm
094ab331c66b8 ee75e27fff91c 5 minutes ago Running kindnet-cni 1 9d2fffac010df kindnet-njjwt
7172a4dccd639 1611cd07b61d5 5 minutes ago Running busybox 1 ba607e850a5b3 busybox
ebe8878927f1c ba04bb24b9575 5 minutes ago Running storage-provisioner 1 39335a8561844 storage-provisioner
48043ed1c0d33 25a5233254979 5 minutes ago Running kube-proxy 1 0dc85cb93c764 kube-proxy-2mx7v
c059e0c3f06b1 e7605f88f17d6 5 minutes ago Running kube-scheduler 1 fb312ae3ee934 kube-scheduler-old-k8s-version-018253
28695298acf55 1df8a2b116bd1 5 minutes ago Running kube-controller-manager 1 a38ba799d82cd kube-controller-manager-old-k8s-version-018253
aaf9777330021 2c08bbbc02d3a 5 minutes ago Running kube-apiserver 1 8731ad4ede5ba kube-apiserver-old-k8s-version-018253
ae43f06d83c40 05b738aa1bc63 5 minutes ago Running etcd 1 a3d8301687c5d etcd-old-k8s-version-018253
85c9427473d53 1611cd07b61d5 6 minutes ago Exited busybox 0 bced4a70ee164 busybox
bd8535e9d4ce9 db91994f4ee8f 7 minutes ago Exited coredns 0 ca4e67a3c3c0b coredns-74ff55c5b-b77sm
e8305ae5f809b ee75e27fff91c 8 minutes ago Exited kindnet-cni 0 0440931c7ca45 kindnet-njjwt
abdf3660fdc64 ba04bb24b9575 8 minutes ago Exited storage-provisioner 0 6831c22391a60 storage-provisioner
35abde900ec6a 25a5233254979 8 minutes ago Exited kube-proxy 0 27cf76fc2992a kube-proxy-2mx7v
4ba6110fd73ed 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 f0db5db7f6bd9 kube-controller-manager-old-k8s-version-018253
5141f0dd46c4f 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 f0193c5f66af7 kube-apiserver-old-k8s-version-018253
1e2e7d43eb567 e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 5d51abf8912d4 kube-scheduler-old-k8s-version-018253
6583a62c81f21 05b738aa1bc63 8 minutes ago Exited etcd 0 3c146a8ce17cf etcd-old-k8s-version-018253
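(In the status table, dashboard-metrics-scraper is the only current pod container in state Exited, at ATTEMPT 5, which lines up with the CrashLoopBackOff back-offs in the kubelet log; the second half of the table shows the pre-restart (attempt 0) containers. A sketch for inspecting the failing attempt, assuming crictl accepts the truncated ID prefix as printed in the table:

    # print the logs of the most recently exited dashboard-metrics-scraper attempt
    minikube -p old-k8s-version-018253 ssh -- sudo crictl logs 0a9a0e51230c7
)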
==> containerd <==
Apr 01 20:34:40 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:34:40.562683518Z" level=info msg="StartContainer for \"3efa3c7e02df3f7c65e5cb857a4f66e42b9f94154395a64d97e343948a646b2e\" returns successfully"
Apr 01 20:34:40 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:34:40.562736712Z" level=info msg="received exit event container_id:\"3efa3c7e02df3f7c65e5cb857a4f66e42b9f94154395a64d97e343948a646b2e\" id:\"3efa3c7e02df3f7c65e5cb857a4f66e42b9f94154395a64d97e343948a646b2e\" pid:2972 exit_status:255 exited_at:{seconds:1743539680 nanos:561638838}"
Apr 01 20:34:40 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:34:40.594473187Z" level=info msg="shim disconnected" id=3efa3c7e02df3f7c65e5cb857a4f66e42b9f94154395a64d97e343948a646b2e namespace=k8s.io
Apr 01 20:34:40 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:34:40.594533561Z" level=warning msg="cleaning up after shim disconnected" id=3efa3c7e02df3f7c65e5cb857a4f66e42b9f94154395a64d97e343948a646b2e namespace=k8s.io
Apr 01 20:34:40 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:34:40.594544408Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 01 20:34:41 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:34:41.250330554Z" level=info msg="RemoveContainer for \"de8e53263fd2007c8ba8d8c1e9d9031cf19689d8518dd902820ada67ce779015\""
Apr 01 20:34:41 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:34:41.260683502Z" level=info msg="RemoveContainer for \"de8e53263fd2007c8ba8d8c1e9d9031cf19689d8518dd902820ada67ce779015\" returns successfully"
Apr 01 20:35:28 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:35:28.438110900Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 01 20:35:28 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:35:28.445456944Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Apr 01 20:35:28 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:35:28.447523486Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 01 20:35:28 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:35:28.447537205Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Apr 01 20:36:03 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:36:03.439981478Z" level=info msg="CreateContainer within sandbox \"b8ea4444a24c5fe82ece80b7cb2f3e4508fe5924f350af95fd29c002a0e0cbca\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Apr 01 20:36:03 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:36:03.458746257Z" level=info msg="CreateContainer within sandbox \"b8ea4444a24c5fe82ece80b7cb2f3e4508fe5924f350af95fd29c002a0e0cbca\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011\""
Apr 01 20:36:03 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:36:03.459366854Z" level=info msg="StartContainer for \"0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011\""
Apr 01 20:36:03 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:36:03.542569457Z" level=info msg="StartContainer for \"0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011\" returns successfully"
Apr 01 20:36:03 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:36:03.544599568Z" level=info msg="received exit event container_id:\"0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011\" id:\"0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011\" pid:3202 exit_status:255 exited_at:{seconds:1743539763 nanos:544347259}"
Apr 01 20:36:03 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:36:03.592746277Z" level=info msg="shim disconnected" id=0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011 namespace=k8s.io
Apr 01 20:36:03 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:36:03.592819205Z" level=warning msg="cleaning up after shim disconnected" id=0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011 namespace=k8s.io
Apr 01 20:36:03 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:36:03.592830323Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 01 20:36:04 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:36:04.501119176Z" level=info msg="RemoveContainer for \"3efa3c7e02df3f7c65e5cb857a4f66e42b9f94154395a64d97e343948a646b2e\""
Apr 01 20:36:04 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:36:04.507885409Z" level=info msg="RemoveContainer for \"3efa3c7e02df3f7c65e5cb857a4f66e42b9f94154395a64d97e343948a646b2e\" returns successfully"
Apr 01 20:38:17 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:38:17.437945778Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 01 20:38:17 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:38:17.445705152Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Apr 01 20:38:17 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:38:17.447812095Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 01 20:38:17 old-k8s-version-018253 containerd[567]: time="2025-04-01T20:38:17.447858290Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
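(The containerd journal shows every metrics-server pull dying at DNS resolution ("trying next host ... no such host") before any registry traffic occurs. The failure is reproducible outside the kubelet; a sketch:

    # expect the same "lookup fake.domain ... no such host" error as in the journal above
    minikube -p old-k8s-version-018253 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
)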
==> coredns [39bd8cf9d17b195f8acc0fa2bdd7e6bc3e5a44c6b1e145d92b35e943ee932d50] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:48864 - 478 "HINFO IN 4247030751065306606.46334965601787634. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.004395062s
==> coredns [bd8535e9d4ce99d5d1f457db31b0c90c2d3c7832381a254abe41121490203e73] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:52341 - 56570 "HINFO IN 9150338518441253788.6048072791119216540. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.049939108s
==> describe nodes <==
Name:               old-k8s-version-018253
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=old-k8s-version-018253
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
                    minikube.k8s.io/name=old-k8s-version-018253
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2025_04_01T20_30_01_0700
                    minikube.k8s.io/version=v1.35.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 01 Apr 2025 20:29:56 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  old-k8s-version-018253
  AcquireTime:     <unset>
  RenewTime:       Tue, 01 Apr 2025 20:38:16 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----             ------  -----------------                 ------------------                ------                      -------
  MemoryPressure   False   Tue, 01 Apr 2025 20:33:24 +0000   Tue, 01 Apr 2025 20:29:50 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Tue, 01 Apr 2025 20:33:24 +0000   Tue, 01 Apr 2025 20:29:50 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Tue, 01 Apr 2025 20:33:24 +0000   Tue, 01 Apr 2025 20:29:50 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            True    Tue, 01 Apr 2025 20:33:24 +0000   Tue, 01 Apr 2025 20:30:15 +0000   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.76.2
  Hostname:    old-k8s-version-018253
Capacity:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022296Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022296Ki
  pods:               110
System Info:
  Machine ID:                 5e808e8b74ab4a7989ab3cfdbaf158a1
  System UUID:                7f7f1d1d-5c45-4222-9c3e-62237bfff9ef
  Boot ID:                    7539f720-cdb6-4e3a-b907-4ec5d3755b2d
  Kernel Version:             5.15.0-1081-aws
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.7.25
  Kubelet Version:            v1.20.0
  Kube-Proxy Version:         v1.20.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:  (12 in total)
  Namespace             Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------             ----                                             ------------  ----------  ---------------  -------------  ---
  default               busybox                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
  kube-system           coredns-74ff55c5b-b77sm                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m4s
  kube-system           etcd-old-k8s-version-018253                      100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m11s
  kube-system           kindnet-njjwt                                    100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m4s
  kube-system           kube-apiserver-old-k8s-version-018253            250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m11s
  kube-system           kube-controller-manager-old-k8s-version-018253   200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m11s
  kube-system           kube-proxy-2mx7v                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
  kube-system           kube-scheduler-old-k8s-version-018253            100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m11s
  kube-system           metrics-server-9975d5f86-xxnsk                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m22s
  kube-system           storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
  kubernetes-dashboard  dashboard-metrics-scraper-8d5bb5db8-rsgcp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
  kubernetes-dashboard  kubernetes-dashboard-cd95d586-lssnx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (47%)  100m (5%)
  memory             420Mi (5%)  220Mi (2%)
  ephemeral-storage  100Mi (0%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  NodeHasSufficientMemory  8m31s (x5 over 8m31s)  kubelet     Node old-k8s-version-018253 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m31s (x5 over 8m31s)  kubelet     Node old-k8s-version-018253 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m31s (x4 over 8m31s)  kubelet     Node old-k8s-version-018253 status is now: NodeHasSufficientPID
  Normal  Starting                 8m11s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  8m11s                  kubelet     Node old-k8s-version-018253 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m11s                  kubelet     Node old-k8s-version-018253 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m11s                  kubelet     Node old-k8s-version-018253 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m11s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                8m4s                   kubelet     Node old-k8s-version-018253 status is now: NodeReady
  Normal  Starting                 8m3s                   kube-proxy  Starting kube-proxy.
  Normal  Starting                 5m55s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-018253 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-018253 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m55s (x7 over 5m55s)  kubelet     Node old-k8s-version-018253 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m44s                  kube-proxy  Starting kube-proxy.
==> dmesg <==
[Apr 1 19:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.013876] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.511505] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.038716] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.735203] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.310239] kauditd_printk_skb: 36 callbacks suppressed
[Apr 1 19:48] hrtimer: interrupt took 10367396 ns
[Apr 1 20:21] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
==> etcd [6583a62c81f21a717077b52b581873a2551bfbad62308a3c7ddc2bed09afac94] <==
raft2025/04/01 20:29:49 INFO: ea7e25599daad906 is starting a new election at term 1
raft2025/04/01 20:29:49 INFO: ea7e25599daad906 became candidate at term 2
raft2025/04/01 20:29:49 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2025/04/01 20:29:49 INFO: ea7e25599daad906 became leader at term 2
raft2025/04/01 20:29:49 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-04-01 20:29:49.789728 I | etcdserver: setting up the initial cluster version to 3.4
2025-04-01 20:29:49.790940 N | etcdserver/membership: set the initial cluster version to 3.4
2025-04-01 20:29:49.791122 I | etcdserver/api: enabled capabilities for version 3.4
2025-04-01 20:29:49.791255 I | etcdserver: published {Name:old-k8s-version-018253 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-04-01 20:29:49.791452 I | embed: ready to serve client requests
2025-04-01 20:29:49.794584 I | embed: serving client requests on 192.168.76.2:2379
2025-04-01 20:29:49.842686 I | embed: ready to serve client requests
2025-04-01 20:29:49.856904 I | embed: serving client requests on 127.0.0.1:2379
2025-04-01 20:29:59.173423 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:30:12.029036 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:30:18.224877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:30:28.224824 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:30:38.225005 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:30:48.225080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:30:58.224915 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:31:08.224731 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:31:18.225071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:31:28.224895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:31:38.224825 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:31:48.224768 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [ae43f06d83c40421f363cdbd86627442ad2067a03571ce0ce1b8e47e8d53f0db] <==
2025-04-01 20:34:10.180849 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:34:20.181166 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:34:30.180898 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:34:40.180924 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:34:50.181088 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:35:00.185173 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:35:10.180913 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:35:20.180983 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:35:30.180847 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:35:40.180762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:35:50.180995 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:36:00.189547 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:36:10.180756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:36:20.180883 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:36:30.181105 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:36:40.180846 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:36:50.180737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:37:00.181474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:37:10.181028 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:37:20.180874 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:37:30.180869 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:37:40.180835 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:37:50.180993 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:38:00.181085 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-01 20:38:10.180730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
20:38:20 up 1:20, 0 users, load average: 1.90, 1.84, 2.34
Linux old-k8s-version-018253 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [094ab331c66b81e976edca29531d811335dbb95c8c2868f7725411732b1983f4] <==
I0401 20:36:17.153411 1 main.go:301] handling current node
I0401 20:36:27.154726 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:36:27.154761 1 main.go:301] handling current node
I0401 20:36:37.146811 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:36:37.146846 1 main.go:301] handling current node
I0401 20:36:47.145817 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:36:47.145856 1 main.go:301] handling current node
I0401 20:36:57.154475 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:36:57.154576 1 main.go:301] handling current node
I0401 20:37:07.154509 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:37:07.154548 1 main.go:301] handling current node
I0401 20:37:17.151157 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:37:17.151190 1 main.go:301] handling current node
I0401 20:37:27.154694 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:37:27.154787 1 main.go:301] handling current node
I0401 20:37:37.145803 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:37:37.145863 1 main.go:301] handling current node
I0401 20:37:47.153035 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:37:47.153387 1 main.go:301] handling current node
I0401 20:37:57.154544 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:37:57.154582 1 main.go:301] handling current node
I0401 20:38:07.151498 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:38:07.151531 1 main.go:301] handling current node
I0401 20:38:17.154729 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:38:17.154764 1 main.go:301] handling current node
==> kindnet [e8305ae5f809be45fe3f16c1db9d261b4b9d3f6ce170c38249fe147e635b5f5a] <==
I0401 20:30:18.931601 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
I0401 20:30:19.350015 1 controller.go:361] Starting controller kube-network-policies
I0401 20:30:19.350036 1 controller.go:365] Waiting for informer caches to sync
I0401 20:30:19.350042 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I0401 20:30:19.450968 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0401 20:30:19.451029 1 metrics.go:61] Registering metrics
I0401 20:30:19.451150 1 controller.go:401] Syncing nftables rules
I0401 20:30:29.358265 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:30:29.358309 1 main.go:301] handling current node
I0401 20:30:39.349834 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:30:39.349871 1 main.go:301] handling current node
I0401 20:30:49.357357 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:30:49.357391 1 main.go:301] handling current node
I0401 20:30:59.357022 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:30:59.357062 1 main.go:301] handling current node
I0401 20:31:09.350443 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:31:09.350502 1 main.go:301] handling current node
I0401 20:31:19.350066 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:31:19.350105 1 main.go:301] handling current node
I0401 20:31:29.352986 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:31:29.353023 1 main.go:301] handling current node
I0401 20:31:39.354256 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:31:39.354289 1 main.go:301] handling current node
I0401 20:31:49.349814 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0401 20:31:49.349848 1 main.go:301] handling current node
==> kube-apiserver [5141f0dd46c4ff344b02546cb2becc9f896c7a1568cb6acea8c75e2c2d94caee] <==
I0401 20:29:57.528214 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0401 20:29:57.528244 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0401 20:29:57.549473 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0401 20:29:57.554558 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0401 20:29:57.554758 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0401 20:29:58.059033 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0401 20:29:58.106978 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0401 20:29:58.184840 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0401 20:29:58.186337 1 controller.go:606] quota admission added evaluator for: endpoints
I0401 20:29:58.190792 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0401 20:29:58.518463 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0401 20:29:59.305695 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0401 20:29:59.874821 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0401 20:29:59.952158 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0401 20:30:15.274612 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0401 20:30:15.281089 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0401 20:30:33.233053 1 client.go:360] parsed scheme: "passthrough"
I0401 20:30:33.233108 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0401 20:30:33.233117 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0401 20:31:11.895258 1 client.go:360] parsed scheme: "passthrough"
I0401 20:31:11.895445 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0401 20:31:11.895464 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0401 20:31:43.645273 1 client.go:360] parsed scheme: "passthrough"
I0401 20:31:43.645315 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0401 20:31:43.645324 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [aaf9777330021529282e86fa98eb575fbb17f1c75d8aea86c3b1f56251a51dd4] <==
I0401 20:34:31.129195 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0401 20:34:31.129205 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0401 20:35:13.801366 1 client.go:360] parsed scheme: "passthrough"
I0401 20:35:13.801409 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0401 20:35:13.801418 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0401 20:35:36.740778 1 handler_proxy.go:102] no RequestInfo found in the context
E0401 20:35:36.740855 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0401 20:35:36.740866 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0401 20:35:53.690147 1 client.go:360] parsed scheme: "passthrough"
I0401 20:35:53.690201 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0401 20:35:53.690376 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0401 20:36:26.579074 1 client.go:360] parsed scheme: "passthrough"
I0401 20:36:26.579122 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0401 20:36:26.579131 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0401 20:37:10.563619 1 client.go:360] parsed scheme: "passthrough"
I0401 20:37:10.563667 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0401 20:37:10.563676 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0401 20:37:35.147563 1 handler_proxy.go:102] no RequestInfo found in the context
E0401 20:37:35.147660 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0401 20:37:35.147671 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0401 20:37:43.864697 1 client.go:360] parsed scheme: "passthrough"
I0401 20:37:43.864749 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0401 20:37:43.864759 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [28695298acf559521ce50098b456d5d0e17c655ad2e93b21ddf39e18ff414593] <==
W0401 20:33:58.233924 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0401 20:34:24.184817 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0401 20:34:29.884374 1 request.go:655] Throttling request took 1.047466361s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0401 20:34:30.735854 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0401 20:34:54.686835 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0401 20:35:02.386371 1 request.go:655] Throttling request took 1.048350008s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
W0401 20:35:03.238127 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0401 20:35:25.188795 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0401 20:35:34.888652 1 request.go:655] Throttling request took 1.048453034s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
W0401 20:35:35.740171 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0401 20:35:55.691036 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0401 20:36:07.390700 1 request.go:655] Throttling request took 1.048363213s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0401 20:36:08.242346 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0401 20:36:26.193386 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0401 20:36:39.892848 1 request.go:655] Throttling request took 1.048407734s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
W0401 20:36:40.744320 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0401 20:36:56.695517 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0401 20:37:12.394702 1 request.go:655] Throttling request took 1.048433925s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
W0401 20:37:13.246328 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0401 20:37:27.197466 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0401 20:37:44.896827 1 request.go:655] Throttling request took 1.047899579s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0401 20:37:45.750661 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0401 20:37:57.699230 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0401 20:38:17.401193 1 request.go:655] Throttling request took 1.048463202s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0401 20:38:18.253029 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-controller-manager [4ba6110fd73ed5a1c1c02409f4e7af17dce60c28d088fc5bb20aab84d1b307ff] <==
I0401 20:30:15.279270 1 shared_informer.go:247] Caches are synced for service account
I0401 20:30:15.299722 1 shared_informer.go:247] Caches are synced for job
I0401 20:30:15.301382 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I0401 20:30:15.309186 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0401 20:30:15.316024 1 shared_informer.go:247] Caches are synced for expand
I0401 20:30:15.353794 1 shared_informer.go:247] Caches are synced for persistent volume
I0401 20:30:15.372988 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-9fqm9"
I0401 20:30:15.392863 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2mx7v"
I0401 20:30:15.409895 1 shared_informer.go:247] Caches are synced for stateful set
I0401 20:30:15.416368 1 shared_informer.go:247] Caches are synced for disruption
I0401 20:30:15.416394 1 disruption.go:339] Sending events to api server.
I0401 20:30:15.417036 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-njjwt"
I0401 20:30:15.423113 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-b77sm"
I0401 20:30:15.483117 1 shared_informer.go:247] Caches are synced for resource quota
I0401 20:30:15.506852 1 shared_informer.go:247] Caches are synced for resource quota
I0401 20:30:15.514324 1 shared_informer.go:247] Caches are synced for HPA
I0401 20:30:15.629740 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0401 20:30:15.935871 1 shared_informer.go:247] Caches are synced for garbage collector
I0401 20:30:15.954359 1 shared_informer.go:247] Caches are synced for garbage collector
I0401 20:30:15.954387 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0401 20:30:17.102899 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0401 20:30:17.152470 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-9fqm9"
I0401 20:30:20.236969 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0401 20:31:56.228042 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E0401 20:31:56.338654 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
==> kube-proxy [35abde900ec6aac8dd6eafa4d9809ef5e3971412126aa7aeb42cbd246bdad428] <==
I0401 20:30:16.506625 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0401 20:30:16.506725 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0401 20:30:16.542501 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0401 20:30:16.542591 1 server_others.go:185] Using iptables Proxier.
I0401 20:30:16.542806 1 server.go:650] Version: v1.20.0
I0401 20:30:16.544516 1 config.go:315] Starting service config controller
I0401 20:30:16.544529 1 shared_informer.go:240] Waiting for caches to sync for service config
I0401 20:30:16.544899 1 config.go:224] Starting endpoint slice config controller
I0401 20:30:16.544909 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0401 20:30:16.645869 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0401 20:30:16.645944 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [48043ed1c0d336d1cc5be0293e5241650293edb3811c010e5907a14f537274f0] <==
I0401 20:32:35.794841 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0401 20:32:35.794919 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0401 20:32:35.815891 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0401 20:32:35.816007 1 server_others.go:185] Using iptables Proxier.
I0401 20:32:35.816399 1 server.go:650] Version: v1.20.0
I0401 20:32:35.817088 1 config.go:315] Starting service config controller
I0401 20:32:35.817159 1 shared_informer.go:240] Waiting for caches to sync for service config
I0401 20:32:35.817994 1 config.go:224] Starting endpoint slice config controller
I0401 20:32:35.818069 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0401 20:32:35.917310 1 shared_informer.go:247] Caches are synced for service config
I0401 20:32:35.918242 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [1e2e7d43eb567b10afc79295173df84a9ef8d888458823df6dd394978eb20dba] <==
I0401 20:29:52.882826 1 serving.go:331] Generated self-signed cert in-memory
W0401 20:29:56.674150 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0401 20:29:56.676991 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0401 20:29:56.677049 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0401 20:29:56.677074 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0401 20:29:56.765473 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0401 20:29:56.768971 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0401 20:29:56.769070 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0401 20:29:56.777289 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0401 20:29:56.828514 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0401 20:29:56.828889 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0401 20:29:56.829169 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0401 20:29:56.829377 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0401 20:29:56.829648 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0401 20:29:56.829936 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0401 20:29:56.830211 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0401 20:29:56.830478 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0401 20:29:56.830481 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0401 20:29:56.830753 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0401 20:29:56.830796 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0401 20:29:56.844768 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0401 20:29:57.658928 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0401 20:29:57.884146 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0401 20:30:00.678503 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [c059e0c3f06b1d4426a280472e3ae9f8a7c58a71e6ec67942f1a17172b55355c] <==
I0401 20:32:28.342511 1 serving.go:331] Generated self-signed cert in-memory
W0401 20:32:34.008137 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0401 20:32:34.008458 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0401 20:32:34.008567 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0401 20:32:34.008957 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0401 20:32:34.340275 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0401 20:32:34.340355 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0401 20:32:34.340364 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0401 20:32:34.340377 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0401 20:32:34.440823 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Apr 01 20:36:48 old-k8s-version-018253 kubelet[661]: E0401 20:36:48.437377 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 01 20:36:57 old-k8s-version-018253 kubelet[661]: I0401 20:36:57.436683 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011
Apr 01 20:36:57 old-k8s-version-018253 kubelet[661]: E0401 20:36:57.437066 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
Apr 01 20:37:02 old-k8s-version-018253 kubelet[661]: E0401 20:37:02.437440 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 01 20:37:10 old-k8s-version-018253 kubelet[661]: I0401 20:37:10.436792 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011
Apr 01 20:37:10 old-k8s-version-018253 kubelet[661]: E0401 20:37:10.437686 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
Apr 01 20:37:15 old-k8s-version-018253 kubelet[661]: E0401 20:37:15.437747 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 01 20:37:23 old-k8s-version-018253 kubelet[661]: I0401 20:37:23.436713 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011
Apr 01 20:37:23 old-k8s-version-018253 kubelet[661]: E0401 20:37:23.437103 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
Apr 01 20:37:27 old-k8s-version-018253 kubelet[661]: E0401 20:37:27.437416 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 01 20:37:35 old-k8s-version-018253 kubelet[661]: I0401 20:37:35.436886 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011
Apr 01 20:37:35 old-k8s-version-018253 kubelet[661]: E0401 20:37:35.438272 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
Apr 01 20:37:39 old-k8s-version-018253 kubelet[661]: E0401 20:37:39.439890 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 01 20:37:48 old-k8s-version-018253 kubelet[661]: I0401 20:37:48.436788 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011
Apr 01 20:37:48 old-k8s-version-018253 kubelet[661]: E0401 20:37:48.437219 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
Apr 01 20:37:50 old-k8s-version-018253 kubelet[661]: E0401 20:37:50.437563 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 01 20:38:01 old-k8s-version-018253 kubelet[661]: I0401 20:38:01.436693 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011
Apr 01 20:38:01 old-k8s-version-018253 kubelet[661]: E0401 20:38:01.437151 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
Apr 01 20:38:05 old-k8s-version-018253 kubelet[661]: E0401 20:38:05.438853 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 01 20:38:16 old-k8s-version-018253 kubelet[661]: I0401 20:38:16.436826 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0a9a0e51230c70b063a20cfbb20c36a71708f5fff1fa9abd36342a2553251011
Apr 01 20:38:16 old-k8s-version-018253 kubelet[661]: E0401 20:38:16.437779 661 pod_workers.go:191] Error syncing pod 74344f28-71cf-4289-924b-659654a239bf ("dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rsgcp_kubernetes-dashboard(74344f28-71cf-4289-924b-659654a239bf)"
Apr 01 20:38:17 old-k8s-version-018253 kubelet[661]: E0401 20:38:17.448050 661 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 01 20:38:17 old-k8s-version-018253 kubelet[661]: E0401 20:38:17.448106 661 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 01 20:38:17 old-k8s-version-018253 kubelet[661]: E0401 20:38:17.448261 661 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-kfv2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 01 20:38:17 old-k8s-version-018253 kubelet[661]: E0401 20:38:17.448294 661 pod_workers.go:191] Error syncing pod f31f1ad5-d39a-493f-9aef-84aa72bd83cb ("metrics-server-9975d5f86-xxnsk_kube-system(f31f1ad5-d39a-493f-9aef-84aa72bd83cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
==> kubernetes-dashboard [7297811dd6164782dfa8fff0f6f3c05e2ac5da77f815c93c80abace35dc97aa4] <==
2025/04/01 20:32:59 Using namespace: kubernetes-dashboard
2025/04/01 20:32:59 Using in-cluster config to connect to apiserver
2025/04/01 20:32:59 Using secret token for csrf signing
2025/04/01 20:32:59 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/04/01 20:32:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/04/01 20:32:59 Successful initial request to the apiserver, version: v1.20.0
2025/04/01 20:32:59 Generating JWE encryption key
2025/04/01 20:32:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/04/01 20:32:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/04/01 20:33:00 Initializing JWE encryption key from synchronized object
2025/04/01 20:33:00 Creating in-cluster Sidecar client
2025/04/01 20:33:00 Serving insecurely on HTTP port: 9090
2025/04/01 20:33:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:33:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:34:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:34:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:35:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:35:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:36:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:36:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:37:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:37:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:38:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/01 20:32:59 Starting overwatch
==> storage-provisioner [abdf3660fdc6488e94e6fea01678f33d39462abff2f7007b4bc5726dffd5d829] <==
I0401 20:30:17.983842 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0401 20:30:18.019561 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0401 20:30:18.019649 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0401 20:30:18.033215 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0401 20:30:18.034246 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-018253_f7294e42-e1a1-4c5f-8516-9d09823a0cb7!
I0401 20:30:18.040684 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"604f28b0-20e7-418f-91b9-5e3f6f752814", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-018253_f7294e42-e1a1-4c5f-8516-9d09823a0cb7 became leader
I0401 20:30:18.134809 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-018253_f7294e42-e1a1-4c5f-8516-9d09823a0cb7!
==> storage-provisioner [ebe8878927f1c9429956642f789071d5122abc17dfb5cf48e02923760fe687f8] <==
I0401 20:32:36.202468 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0401 20:32:36.228016 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0401 20:32:36.228088 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0401 20:32:53.737658 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0401 20:32:53.738742 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-018253_44e03ea8-1d0d-440c-8ddc-95bac17b6e66!
I0401 20:32:53.738867 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"604f28b0-20e7-418f-91b9-5e3f6f752814", APIVersion:"v1", ResourceVersion:"759", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-018253_44e03ea8-1d0d-440c-8ddc-95bac17b6e66 became leader
I0401 20:32:53.843901 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-018253_44e03ea8-1d0d-440c-8ddc-95bac17b6e66!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-018253 -n old-k8s-version-018253
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-018253 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-xxnsk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-018253 describe pod metrics-server-9975d5f86-xxnsk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-018253 describe pod metrics-server-9975d5f86-xxnsk: exit status 1 (99.773257ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-xxnsk" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-018253 describe pod metrics-server-9975d5f86-xxnsk: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (372.74s)