=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-743648 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-743648 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m13.227449091s)
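For readability, the same failing invocation with its flags split one per group (copied verbatim from the command above; reproducing it assumes the same out/minikube-linux-arm64 build and profile name):

    out/minikube-linux-arm64 start -p old-k8s-version-743648 \
      --memory=2200 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.20.0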
-- stdout --
* [old-k8s-version-743648] minikube v1.34.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=19872
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19872-2421/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-2421/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-743648" primary control-plane node in "old-k8s-version-743648" cluster
* Pulling base image v0.0.45-1730110049-19872 ...
* Restarting existing docker container for "old-k8s-version-743648" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
* Verifying Kubernetes components...
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-743648 addons enable metrics-server
* Enabled addons: storage-provisioner, dashboard, default-storageclass, metrics-server
-- /stdout --
** stderr **
I1028 17:50:19.991047 215146 out.go:345] Setting OutFile to fd 1 ...
I1028 17:50:19.991247 215146 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:50:19.991274 215146 out.go:358] Setting ErrFile to fd 2...
I1028 17:50:19.991296 215146 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:50:19.991675 215146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-2421/.minikube/bin
I1028 17:50:19.993059 215146 out.go:352] Setting JSON to false
I1028 17:50:19.995498 215146 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5566,"bootTime":1730132254,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1028 17:50:19.995601 215146 start.go:139] virtualization:
I1028 17:50:19.999133 215146 out.go:177] * [old-k8s-version-743648] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1028 17:50:20.003373 215146 out.go:177] - MINIKUBE_LOCATION=19872
I1028 17:50:20.003548 215146 notify.go:220] Checking for updates...
I1028 17:50:20.026125 215146 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1028 17:50:20.028897 215146 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19872-2421/kubeconfig
I1028 17:50:20.039505 215146 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-2421/.minikube
I1028 17:50:20.042165 215146 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1028 17:50:20.046316 215146 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1028 17:50:20.048863 215146 config.go:182] Loaded profile config "old-k8s-version-743648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1028 17:50:20.051473 215146 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
I1028 17:50:20.054193 215146 driver.go:394] Setting default libvirt URI to qemu:///system
I1028 17:50:20.107640 215146 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1028 17:50:20.107794 215146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1028 17:50:20.186026 215146 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:59 SystemTime:2024-10-28 17:50:20.169536407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1028 17:50:20.186147 215146 docker.go:318] overlay module found
I1028 17:50:20.188735 215146 out.go:177] * Using the docker driver based on existing profile
I1028 17:50:20.190622 215146 start.go:297] selected driver: docker
I1028 17:50:20.190644 215146 start.go:901] validating driver "docker" against &{Name:old-k8s-version-743648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-743648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 17:50:20.190776 215146 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1028 17:50:20.191556 215146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1028 17:50:20.284723 215146 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:59 SystemTime:2024-10-28 17:50:20.273393374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1028 17:50:20.285170 215146 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1028 17:50:20.285216 215146 cni.go:84] Creating CNI manager for ""
I1028 17:50:20.285270 215146 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1028 17:50:20.285327 215146 start.go:340] cluster config:
{Name:old-k8s-version-743648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-743648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 17:50:20.291045 215146 out.go:177] * Starting "old-k8s-version-743648" primary control-plane node in "old-k8s-version-743648" cluster
I1028 17:50:20.293792 215146 cache.go:121] Beginning downloading kic base image for docker with containerd
I1028 17:50:20.296171 215146 out.go:177] * Pulling base image v0.0.45-1730110049-19872 ...
I1028 17:50:20.298413 215146 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1028 17:50:20.298472 215146 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19872-2421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I1028 17:50:20.298486 215146 cache.go:56] Caching tarball of preloaded images
I1028 17:50:20.298506 215146 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local docker daemon
I1028 17:50:20.298590 215146 preload.go:172] Found /home/jenkins/minikube-integration/19872-2421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1028 17:50:20.298600 215146 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I1028 17:50:20.298717 215146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/old-k8s-version-743648/config.json ...
I1028 17:50:20.319178 215146 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local docker daemon, skipping pull
I1028 17:50:20.319198 215146 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 exists in daemon, skipping load
I1028 17:50:20.319211 215146 cache.go:194] Successfully downloaded all kic artifacts
I1028 17:50:20.319235 215146 start.go:360] acquireMachinesLock for old-k8s-version-743648: {Name:mkbc946fc9b9661051d822ab54db26a6d4c41906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:50:20.319290 215146 start.go:364] duration metric: took 38.843µs to acquireMachinesLock for "old-k8s-version-743648"
I1028 17:50:20.319310 215146 start.go:96] Skipping create...Using existing machine configuration
I1028 17:50:20.319315 215146 fix.go:54] fixHost starting:
I1028 17:50:20.319564 215146 cli_runner.go:164] Run: docker container inspect old-k8s-version-743648 --format={{.State.Status}}
I1028 17:50:20.344198 215146 fix.go:112] recreateIfNeeded on old-k8s-version-743648: state=Stopped err=<nil>
W1028 17:50:20.344290 215146 fix.go:138] unexpected machine state, will restart: <nil>
I1028 17:50:20.347346 215146 out.go:177] * Restarting existing docker container for "old-k8s-version-743648" ...
I1028 17:50:20.349827 215146 cli_runner.go:164] Run: docker start old-k8s-version-743648
I1028 17:50:20.703211 215146 cli_runner.go:164] Run: docker container inspect old-k8s-version-743648 --format={{.State.Status}}
I1028 17:50:20.726819 215146 kic.go:430] container "old-k8s-version-743648" state is running.
I1028 17:50:20.727199 215146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-743648
I1028 17:50:20.769975 215146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/old-k8s-version-743648/config.json ...
I1028 17:50:20.770192 215146 machine.go:93] provisionDockerMachine start ...
I1028 17:50:20.770264 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:20.795177 215146 main.go:141] libmachine: Using SSH client type: native
I1028 17:50:20.796195 215146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 33066 <nil> <nil>}
I1028 17:50:20.796226 215146 main.go:141] libmachine: About to run SSH command:
hostname
I1028 17:50:20.800337 215146 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1028 17:50:23.971790 215146 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-743648
I1028 17:50:23.971863 215146 ubuntu.go:169] provisioning hostname "old-k8s-version-743648"
I1028 17:50:23.971949 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:24.064759 215146 main.go:141] libmachine: Using SSH client type: native
I1028 17:50:24.065026 215146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 33066 <nil> <nil>}
I1028 17:50:24.065044 215146 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-743648 && echo "old-k8s-version-743648" | sudo tee /etc/hostname
I1028 17:50:24.244992 215146 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-743648
I1028 17:50:24.245112 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:24.268495 215146 main.go:141] libmachine: Using SSH client type: native
I1028 17:50:24.268765 215146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 33066 <nil> <nil>}
I1028 17:50:24.268784 215146 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-743648' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-743648/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-743648' | sudo tee -a /etc/hosts;
fi
fi
I1028 17:50:24.428378 215146 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1028 17:50:24.428403 215146 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19872-2421/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-2421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-2421/.minikube}
I1028 17:50:24.428467 215146 ubuntu.go:177] setting up certificates
I1028 17:50:24.428478 215146 provision.go:84] configureAuth start
I1028 17:50:24.428577 215146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-743648
I1028 17:50:24.449907 215146 provision.go:143] copyHostCerts
I1028 17:50:24.449989 215146 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-2421/.minikube/ca.pem, removing ...
I1028 17:50:24.450009 215146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-2421/.minikube/ca.pem
I1028 17:50:24.450079 215146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-2421/.minikube/ca.pem (1082 bytes)
I1028 17:50:24.450206 215146 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-2421/.minikube/cert.pem, removing ...
I1028 17:50:24.450216 215146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-2421/.minikube/cert.pem
I1028 17:50:24.450251 215146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-2421/.minikube/cert.pem (1123 bytes)
I1028 17:50:24.450305 215146 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-2421/.minikube/key.pem, removing ...
I1028 17:50:24.450315 215146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-2421/.minikube/key.pem
I1028 17:50:24.450340 215146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-2421/.minikube/key.pem (1679 bytes)
I1028 17:50:24.450391 215146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-2421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-743648 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-743648]
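The server cert regenerated above carries the SAN list shown (127.0.0.1, 192.168.76.2, localhost, minikube, old-k8s-version-743648). A sketch for inspecting it by hand, assuming openssl is available on the Jenkins host; this is not part of the test run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19872-2421/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'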
I1028 17:50:24.906539 215146 provision.go:177] copyRemoteCerts
I1028 17:50:24.906633 215146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1028 17:50:24.906703 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:24.926000 215146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/old-k8s-version-743648/id_rsa Username:docker}
I1028 17:50:25.054838 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1028 17:50:25.091061 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1028 17:50:25.123922 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1028 17:50:25.155905 215146 provision.go:87] duration metric: took 727.406581ms to configureAuth
I1028 17:50:25.155982 215146 ubuntu.go:193] setting minikube options for container-runtime
I1028 17:50:25.156232 215146 config.go:182] Loaded profile config "old-k8s-version-743648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1028 17:50:25.156267 215146 machine.go:96] duration metric: took 4.386058077s to provisionDockerMachine
I1028 17:50:25.156304 215146 start.go:293] postStartSetup for "old-k8s-version-743648" (driver="docker")
I1028 17:50:25.156329 215146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1028 17:50:25.156434 215146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1028 17:50:25.156513 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:25.174762 215146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/old-k8s-version-743648/id_rsa Username:docker}
I1028 17:50:25.278164 215146 ssh_runner.go:195] Run: cat /etc/os-release
I1028 17:50:25.282579 215146 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1028 17:50:25.282616 215146 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1028 17:50:25.282629 215146 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1028 17:50:25.282636 215146 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1028 17:50:25.282647 215146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-2421/.minikube/addons for local assets ...
I1028 17:50:25.282709 215146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-2421/.minikube/files for local assets ...
I1028 17:50:25.282795 215146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-2421/.minikube/files/etc/ssl/certs/78662.pem -> 78662.pem in /etc/ssl/certs
I1028 17:50:25.282901 215146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1028 17:50:25.292760 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/files/etc/ssl/certs/78662.pem --> /etc/ssl/certs/78662.pem (1708 bytes)
I1028 17:50:25.330080 215146 start.go:296] duration metric: took 173.747543ms for postStartSetup
I1028 17:50:25.330254 215146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1028 17:50:25.330334 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:25.352663 215146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/old-k8s-version-743648/id_rsa Username:docker}
I1028 17:50:25.462342 215146 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1028 17:50:25.467925 215146 fix.go:56] duration metric: took 5.148602418s for fixHost
I1028 17:50:25.467949 215146 start.go:83] releasing machines lock for "old-k8s-version-743648", held for 5.148651837s
I1028 17:50:25.468018 215146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-743648
I1028 17:50:25.488857 215146 ssh_runner.go:195] Run: cat /version.json
I1028 17:50:25.488879 215146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1028 17:50:25.488920 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:25.488931 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:25.509058 215146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/old-k8s-version-743648/id_rsa Username:docker}
I1028 17:50:25.516871 215146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/old-k8s-version-743648/id_rsa Username:docker}
I1028 17:50:25.612643 215146 ssh_runner.go:195] Run: systemctl --version
I1028 17:50:25.768794 215146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1028 17:50:25.775061 215146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1028 17:50:25.813893 215146 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1028 17:50:25.814065 215146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1028 17:50:25.824681 215146 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
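The find/sed at 17:50:25.775 injects a "name" key and pins cniVersion to 1.0.0 in any loopback config it finds; a patched file would look roughly like the following (hypothetical contents, reconstructed from the sed expressions rather than read off the node):

    sudo cat /etc/cni/net.d/*loopback.conf*
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"
    # }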
I1028 17:50:25.824784 215146 start.go:495] detecting cgroup driver to use...
I1028 17:50:25.824831 215146 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1028 17:50:25.824914 215146 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1028 17:50:25.848276 215146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1028 17:50:25.861448 215146 docker.go:217] disabling cri-docker service (if available) ...
I1028 17:50:25.861562 215146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1028 17:50:25.878277 215146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1028 17:50:25.893391 215146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1028 17:50:26.002788 215146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1028 17:50:26.108432 215146 docker.go:233] disabling docker service ...
I1028 17:50:26.108499 215146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1028 17:50:26.122141 215146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1028 17:50:26.135019 215146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1028 17:50:26.222799 215146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1028 17:50:26.314513 215146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1028 17:50:26.327373 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1028 17:50:26.345072 215146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I1028 17:50:26.356832 215146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1028 17:50:26.369274 215146 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1028 17:50:26.369418 215146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1028 17:50:26.379629 215146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1028 17:50:26.390409 215146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1028 17:50:26.404810 215146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1028 17:50:26.419769 215146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1028 17:50:26.429535 215146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1028 17:50:26.439084 215146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1028 17:50:26.447838 215146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1028 17:50:26.456575 215146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 17:50:26.564583 215146 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1028 17:50:26.825289 215146 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1028 17:50:26.825357 215146 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1028 17:50:26.829708 215146 start.go:563] Will wait 60s for crictl version
I1028 17:50:26.829773 215146 ssh_runner.go:195] Run: which crictl
I1028 17:50:26.833564 215146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1028 17:50:26.901023 215146 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1028 17:50:26.901102 215146 ssh_runner.go:195] Run: containerd --version
I1028 17:50:26.933653 215146 ssh_runner.go:195] Run: containerd --version
I1028 17:50:26.965162 215146 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
I1028 17:50:26.966982 215146 cli_runner.go:164] Run: docker network inspect old-k8s-version-743648 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1028 17:50:26.987582 215146 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1028 17:50:26.991459 215146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1028 17:50:27.002598 215146 kubeadm.go:883] updating cluster {Name:old-k8s-version-743648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-743648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1028 17:50:27.002717 215146 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1028 17:50:27.002775 215146 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 17:50:27.070631 215146 containerd.go:627] all images are preloaded for containerd runtime.
I1028 17:50:27.070656 215146 containerd.go:534] Images already preloaded, skipping extraction
I1028 17:50:27.070714 215146 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 17:50:27.115982 215146 containerd.go:627] all images are preloaded for containerd runtime.
I1028 17:50:27.116005 215146 cache_images.go:84] Images are preloaded, skipping loading
I1028 17:50:27.116013 215146 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I1028 17:50:27.116130 215146 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-743648 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-743648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1028 17:50:27.116202 215146 ssh_runner.go:195] Run: sudo crictl info
I1028 17:50:27.178579 215146 cni.go:84] Creating CNI manager for ""
I1028 17:50:27.178607 215146 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1028 17:50:27.178618 215146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1028 17:50:27.178638 215146 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-743648 NodeName:old-k8s-version-743648 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1028 17:50:27.178782 215146 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-743648"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
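This generated config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hypothetical sanity check outside the test flow, it could be dry-run against the kubeadm binary minikube stages under /var/lib/minikube/binaries/v1.20.0 (alongside the kubelet referenced above):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run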
I1028 17:50:27.178859 215146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I1028 17:50:27.204103 215146 binaries.go:44] Found k8s binaries, skipping transfer
I1028 17:50:27.204194 215146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1028 17:50:27.217118 215146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I1028 17:50:27.238643 215146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1028 17:50:27.261729 215146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I1028 17:50:27.289624 215146 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1028 17:50:27.298073 215146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1028 17:50:27.311532 215146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 17:50:27.405362 215146 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1028 17:50:27.421193 215146 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/old-k8s-version-743648 for IP: 192.168.76.2
I1028 17:50:27.421220 215146 certs.go:194] generating shared ca certs ...
I1028 17:50:27.421237 215146 certs.go:226] acquiring lock for ca certs: {Name:mk3457f8c75e004d4fb7865e732d1b8d6b3cdec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 17:50:27.421382 215146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-2421/.minikube/ca.key
I1028 17:50:27.421431 215146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-2421/.minikube/proxy-client-ca.key
I1028 17:50:27.421442 215146 certs.go:256] generating profile certs ...
I1028 17:50:27.421551 215146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/old-k8s-version-743648/client.key
I1028 17:50:27.421637 215146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/old-k8s-version-743648/apiserver.key.2257cf18
I1028 17:50:27.421685 215146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/old-k8s-version-743648/proxy-client.key
I1028 17:50:27.421796 215146 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/7866.pem (1338 bytes)
W1028 17:50:27.421831 215146 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-2421/.minikube/certs/7866_empty.pem, impossibly tiny 0 bytes
I1028 17:50:27.421845 215146 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca-key.pem (1679 bytes)
I1028 17:50:27.421871 215146 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca.pem (1082 bytes)
I1028 17:50:27.421900 215146 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/cert.pem (1123 bytes)
I1028 17:50:27.421926 215146 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/key.pem (1679 bytes)
I1028 17:50:27.421972 215146 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/files/etc/ssl/certs/78662.pem (1708 bytes)
I1028 17:50:27.422643 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1028 17:50:27.449952 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1028 17:50:27.476788 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1028 17:50:27.515058 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1028 17:50:27.569091 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/old-k8s-version-743648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I1028 17:50:27.604883 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/old-k8s-version-743648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1028 17:50:27.636363 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/old-k8s-version-743648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1028 17:50:27.674607 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/old-k8s-version-743648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1028 17:50:27.715447 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/files/etc/ssl/certs/78662.pem --> /usr/share/ca-certificates/78662.pem (1708 bytes)
I1028 17:50:27.745184 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1028 17:50:27.775170 215146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/certs/7866.pem --> /usr/share/ca-certificates/7866.pem (1338 bytes)
I1028 17:50:27.805078 215146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1028 17:50:27.829444 215146 ssh_runner.go:195] Run: openssl version
I1028 17:50:27.835701 215146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78662.pem && ln -fs /usr/share/ca-certificates/78662.pem /etc/ssl/certs/78662.pem"
I1028 17:50:27.847499 215146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78662.pem
I1028 17:50:27.851815 215146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:09 /usr/share/ca-certificates/78662.pem
I1028 17:50:27.851886 215146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78662.pem
I1028 17:50:27.859727 215146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78662.pem /etc/ssl/certs/3ec20f2e.0"
I1028 17:50:27.869455 215146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1028 17:50:27.881983 215146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1028 17:50:27.885937 215146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:01 /usr/share/ca-certificates/minikubeCA.pem
I1028 17:50:27.886018 215146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1028 17:50:27.893448 215146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1028 17:50:27.905547 215146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7866.pem && ln -fs /usr/share/ca-certificates/7866.pem /etc/ssl/certs/7866.pem"
I1028 17:50:27.918688 215146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7866.pem
I1028 17:50:27.923253 215146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:09 /usr/share/ca-certificates/7866.pem
I1028 17:50:27.923354 215146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7866.pem
I1028 17:50:27.931280 215146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7866.pem /etc/ssl/certs/51391683.0"
I1028 17:50:27.943335 215146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1028 17:50:27.947377 215146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1028 17:50:27.954733 215146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1028 17:50:27.963477 215146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1028 17:50:27.971061 215146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1028 17:50:27.978769 215146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1028 17:50:27.986162 215146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
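Each openssl -checkend 86400 above exits 0 only if the certificate remains valid for at least another 24 hours (86400 seconds). The same check run by hand, for example:

    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo 'valid for >=24h' || echo 'expires within 24h'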
I1028 17:50:27.993678 215146 kubeadm.go:392] StartCluster: {Name:old-k8s-version-743648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-743648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 17:50:27.993804 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1028 17:50:27.993884 215146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1028 17:50:28.067677 215146 cri.go:89] found id: "697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e"
I1028 17:50:28.067725 215146 cri.go:89] found id: "eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d"
I1028 17:50:28.067730 215146 cri.go:89] found id: "e3ff6f6a8a7ec2119b85411852cf681bf9fbc4f5f358df624b7a98888ee23880"
I1028 17:50:28.067734 215146 cri.go:89] found id: "f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1"
I1028 17:50:28.067738 215146 cri.go:89] found id: "48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420"
I1028 17:50:28.067741 215146 cri.go:89] found id: "e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98"
I1028 17:50:28.067744 215146 cri.go:89] found id: "07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45"
I1028 17:50:28.067748 215146 cri.go:89] found id: "c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2"
I1028 17:50:28.067751 215146 cri.go:89] found id: ""
I1028 17:50:28.067812 215146 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1028 17:50:28.086566 215146 cri.go:116] JSON = null
W1028 17:50:28.086645 215146 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
I1028 17:50:28.086741 215146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1028 17:50:28.098259 215146 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1028 17:50:28.098284 215146 kubeadm.go:593] restartPrimaryControlPlane start ...
I1028 17:50:28.098349 215146 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1028 17:50:28.108519 215146 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1028 17:50:28.109024 215146 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-743648" does not appear in /home/jenkins/minikube-integration/19872-2421/kubeconfig
I1028 17:50:28.109143 215146 kubeconfig.go:62] /home/jenkins/minikube-integration/19872-2421/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-743648" cluster setting kubeconfig missing "old-k8s-version-743648" context setting]
I1028 17:50:28.109514 215146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-2421/kubeconfig: {Name:mk5d0caa294b9d2ca80eaff01c0ffe9a532db746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
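[Editor note: kubeconfig.go first verifies that the profile's cluster and context exist in the kubeconfig, then repairs the file under the write lock shown above. A minimal sketch of such a repair using client-go's clientcmd package; the names and endpoint are taken from this log, the function itself is an assumption, not minikube's implementation:]

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds missing cluster/context entries, mirroring the
// "needs updating (will repair)" path in the log. Sketch only.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server // e.g. https://192.168.76.2:8443
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}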
I1028 17:50:28.111227 215146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1028 17:50:28.121702 215146 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I1028 17:50:28.121755 215146 kubeadm.go:597] duration metric: took 23.465225ms to restartPrimaryControlPlane
I1028 17:50:28.121767 215146 kubeadm.go:394] duration metric: took 128.101744ms to StartCluster
I1028 17:50:28.121782 215146 settings.go:142] acquiring lock: {Name:mk543e20b5eb6c4236b6d62e05ff811d7fc9498d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 17:50:28.121857 215146 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19872-2421/kubeconfig
I1028 17:50:28.122570 215146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-2421/kubeconfig: {Name:mk5d0caa294b9d2ca80eaff01c0ffe9a532db746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 17:50:28.122839 215146 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1028 17:50:28.123067 215146 config.go:182] Loaded profile config "old-k8s-version-743648": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1028 17:50:28.123184 215146 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1028 17:50:28.123362 215146 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-743648"
I1028 17:50:28.123383 215146 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-743648"
W1028 17:50:28.123390 215146 addons.go:243] addon storage-provisioner should already be in state true
I1028 17:50:28.123430 215146 host.go:66] Checking if "old-k8s-version-743648" exists ...
I1028 17:50:28.123450 215146 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-743648"
I1028 17:50:28.123473 215146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-743648"
I1028 17:50:28.123767 215146 cli_runner.go:164] Run: docker container inspect old-k8s-version-743648 --format={{.State.Status}}
I1028 17:50:28.124122 215146 cli_runner.go:164] Run: docker container inspect old-k8s-version-743648 --format={{.State.Status}}
I1028 17:50:28.124591 215146 addons.go:69] Setting dashboard=true in profile "old-k8s-version-743648"
I1028 17:50:28.124612 215146 addons.go:234] Setting addon dashboard=true in "old-k8s-version-743648"
W1028 17:50:28.124623 215146 addons.go:243] addon dashboard should already be in state true
I1028 17:50:28.124661 215146 host.go:66] Checking if "old-k8s-version-743648" exists ...
I1028 17:50:28.125164 215146 cli_runner.go:164] Run: docker container inspect old-k8s-version-743648 --format={{.State.Status}}
I1028 17:50:28.126056 215146 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-743648"
I1028 17:50:28.126083 215146 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-743648"
W1028 17:50:28.126091 215146 addons.go:243] addon metrics-server should already be in state true
I1028 17:50:28.126170 215146 host.go:66] Checking if "old-k8s-version-743648" exists ...
I1028 17:50:28.126628 215146 cli_runner.go:164] Run: docker container inspect old-k8s-version-743648 --format={{.State.Status}}
I1028 17:50:28.137406 215146 out.go:177] * Verifying Kubernetes components...
I1028 17:50:28.139953 215146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 17:50:28.187914 215146 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1028 17:50:28.190730 215146 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1028 17:50:28.192611 215146 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-743648"
W1028 17:50:28.192632 215146 addons.go:243] addon default-storageclass should already be in state true
I1028 17:50:28.192659 215146 host.go:66] Checking if "old-k8s-version-743648" exists ...
I1028 17:50:28.193059 215146 cli_runner.go:164] Run: docker container inspect old-k8s-version-743648 --format={{.State.Status}}
I1028 17:50:28.199067 215146 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1028 17:50:28.199143 215146 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1028 17:50:28.199244 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
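[Editor note: the cli_runner.go inspect calls above resolve the host port Docker mapped to the container's SSH port, which the sshutil.go lines later connect to (port 33066 here). A small Go sketch wrapping the same docker template; the helper name is hypothetical:]

package main

import (
	"os/exec"
	"strings"
)

// sshHostPort returns the host port mapped to the container's 22/tcp,
// using the same inspect template as the cli_runner.go lines above.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}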
I1028 17:50:28.204230 215146 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1028 17:50:28.206757 215146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1028 17:50:28.206780 215146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1028 17:50:28.206848 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:28.219997 215146 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1028 17:50:28.222372 215146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1028 17:50:28.222399 215146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1028 17:50:28.222479 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:28.252463 215146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1028 17:50:28.252487 215146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1028 17:50:28.252582 215146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-743648
I1028 17:50:28.272824 215146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/old-k8s-version-743648/id_rsa Username:docker}
I1028 17:50:28.296878 215146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/old-k8s-version-743648/id_rsa Username:docker}
I1028 17:50:28.314411 215146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/old-k8s-version-743648/id_rsa Username:docker}
I1028 17:50:28.316947 215146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/old-k8s-version-743648/id_rsa Username:docker}
I1028 17:50:28.401918 215146 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1028 17:50:28.418273 215146 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-743648" to be "Ready" ...
I1028 17:50:28.456199 215146 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1028 17:50:28.456294 215146 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1028 17:50:28.476010 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 17:50:28.490282 215146 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1028 17:50:28.490310 215146 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1028 17:50:28.508219 215146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1028 17:50:28.508243 215146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1028 17:50:28.536746 215146 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1028 17:50:28.536785 215146 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1028 17:50:28.557390 215146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1028 17:50:28.557418 215146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1028 17:50:28.588510 215146 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1028 17:50:28.588537 215146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1028 17:50:28.609714 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1028 17:50:28.621226 215146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1028 17:50:28.621256 215146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1028 17:50:28.629204 215146 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1028 17:50:28.629250 215146 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1028 17:50:28.685481 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 17:50:28.713024 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:28.713062 215146 retry.go:31] will retry after 249.641032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
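[Editor note: every "will retry after ..." pair below is the same pattern: `kubectl apply` fails with connection refused while the restarted apiserver is still binding localhost:8443, and retry.go re-runs it after a jittered, growing delay. A plain-Go sketch of such a loop; the initial delay, factor, and jitter are illustrative, not minikube's exact retry.go policy:]

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply -f manifest` with jittered,
// doubling delays, as in the retry.go:31 lines in this log. Sketch only.
func applyWithRetry(manifest string, attempts int) error {
	delay := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "-f", manifest).Run(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return fmt.Errorf("apply %s failed after %d attempts: %w", manifest, attempts, err)
}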
I1028 17:50:28.741295 215146 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1028 17:50:28.741328 215146 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1028 17:50:28.830152 215146 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1028 17:50:28.830199 215146 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
W1028 17:50:28.886072 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:28.886147 215146 retry.go:31] will retry after 157.628194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:28.891784 215146 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1028 17:50:28.891813 215146 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
W1028 17:50:28.904689 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:28.904763 215146 retry.go:31] will retry after 300.814665ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:28.917285 215146 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1028 17:50:28.917360 215146 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1028 17:50:28.947169 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1028 17:50:28.963868 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 17:50:29.044662 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1028 17:50:29.115101 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:29.115132 215146 retry.go:31] will retry after 345.572336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 17:50:29.137402 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:29.137436 215146 retry.go:31] will retry after 367.037177ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 17:50:29.197950 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:29.197984 215146 retry.go:31] will retry after 325.458493ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:29.206201 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 17:50:29.292946 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:29.292977 215146 retry.go:31] will retry after 234.008781ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:29.461927 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1028 17:50:29.505389 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 17:50:29.523707 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1028 17:50:29.528060 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 17:50:29.587711 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:29.587745 215146 retry.go:31] will retry after 548.105078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 17:50:29.836531 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:29.836573 215146 retry.go:31] will retry after 367.821073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 17:50:29.836619 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:29.836632 215146 retry.go:31] will retry after 333.119268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 17:50:29.836668 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:29.836690 215146 retry.go:31] will retry after 828.737205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:30.136915 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1028 17:50:30.170416 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1028 17:50:30.205393 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1028 17:50:30.336214 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:30.336296 215146 retry.go:31] will retry after 639.459882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 17:50:30.360681 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:30.360762 215146 retry.go:31] will retry after 1.190321719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 17:50:30.412425 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:30.412502 215146 retry.go:31] will retry after 1.012067697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:30.419100 215146 node_ready.go:53] error getting node "old-k8s-version-743648": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-743648": dial tcp 192.168.76.2:8443: connect: connection refused
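[Editor note: node_ready.go keeps polling the node object and treats errors like the dial failure above as transient until the apiserver answers. A client-go sketch of that wait; the interval, timeout handling, and function name are assumptions:]

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the node reports Ready=True, swallowing
// transient errors such as "connection refused" during the restart.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient: retry, as the log does
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}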
I1028 17:50:30.665613 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 17:50:30.781004 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:30.781083 215146 retry.go:31] will retry after 867.401968ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:30.976846 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1028 17:50:31.068059 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:31.068143 215146 retry.go:31] will retry after 1.092810424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:31.425025 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1028 17:50:31.522227 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:31.522311 215146 retry.go:31] will retry after 1.275349181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:31.551373 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1028 17:50:31.645069 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:31.645103 215146 retry.go:31] will retry after 709.215568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:31.649248 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 17:50:31.726052 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:31.726137 215146 retry.go:31] will retry after 1.661237295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:32.161144 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1028 17:50:32.247356 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:32.247387 215146 retry.go:31] will retry after 1.31242894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:32.354533 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1028 17:50:32.419190 215146 node_ready.go:53] error getting node "old-k8s-version-743648": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-743648": dial tcp 192.168.76.2:8443: connect: connection refused
W1028 17:50:32.452586 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:32.452644 215146 retry.go:31] will retry after 2.038100899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:32.797995 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1028 17:50:32.912184 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:32.912222 215146 retry.go:31] will retry after 1.885176329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:33.388083 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 17:50:33.482581 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:33.482643 215146 retry.go:31] will retry after 1.530141348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:33.560967 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1028 17:50:33.649774 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:33.649821 215146 retry.go:31] will retry after 2.539342723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:34.490985 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1028 17:50:34.632178 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:34.632225 215146 retry.go:31] will retry after 1.855325231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:34.798452 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 17:50:34.920943 215146 node_ready.go:53] error getting node "old-k8s-version-743648": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-743648": dial tcp 192.168.76.2:8443: connect: connection refused
W1028 17:50:34.962865 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:34.962901 215146 retry.go:31] will retry after 4.162380121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:35.013422 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 17:50:35.269762 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:35.269799 215146 retry.go:31] will retry after 1.625161206s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:36.189781 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1028 17:50:36.329959 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:36.329994 215146 retry.go:31] will retry after 2.176317677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:36.488671 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1028 17:50:36.584628 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:36.584657 215146 retry.go:31] will retry after 2.387194881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:36.895110 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 17:50:37.243890 215146 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:37.243921 215146 retry.go:31] will retry after 5.415643272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 17:50:37.419564 215146 node_ready.go:53] error getting node "old-k8s-version-743648": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-743648": dial tcp 192.168.76.2:8443: connect: connection refused
I1028 17:50:38.506668 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1028 17:50:38.972305 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1028 17:50:39.125675 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 17:50:42.659692 215146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1028 17:50:47.920762 215146 node_ready.go:53] error getting node "old-k8s-version-743648": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-743648": net/http: TLS handshake timeout
I1028 17:50:48.831124 215146 node_ready.go:49] node "old-k8s-version-743648" has status "Ready":"True"
I1028 17:50:48.831148 215146 node_ready.go:38] duration metric: took 20.412804956s for node "old-k8s-version-743648" to be "Ready" ...
I1028 17:50:48.831158 215146 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
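[Editor note: the pod_ready.go lines that follow apply a per-pod Ready check to each pod matched by the labels listed above, skipping pods whose host node is not yet Ready. A short client-go sketch of the underlying check; helper names are hypothetical:]

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether a pod's Ready condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// readyBySelector lists kube-system pods for one label selector
// (e.g. "k8s-app=kube-dns") and checks each for readiness.
func readyBySelector(cs kubernetes.Interface, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for i := range pods.Items {
		if !isPodReady(&pods.Items[i]) {
			return false, nil
		}
	}
	return true, nil
}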
I1028 17:50:49.092405 215146 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-97b6q" in "kube-system" namespace to be "Ready" ...
I1028 17:50:49.318060 215146 pod_ready.go:93] pod "coredns-74ff55c5b-97b6q" in "kube-system" namespace has status "Ready":"True"
I1028 17:50:49.318141 215146 pod_ready.go:82] duration metric: took 225.651749ms for pod "coredns-74ff55c5b-97b6q" in "kube-system" namespace to be "Ready" ...
I1028 17:50:49.318167 215146 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
I1028 17:50:49.350075 215146 pod_ready.go:98] node "old-k8s-version-743648" hosting pod "etcd-old-k8s-version-743648" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-743648" has status "Ready":"False"
I1028 17:50:49.350150 215146 pod_ready.go:82] duration metric: took 31.951746ms for pod "etcd-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
E1028 17:50:49.350176 215146 pod_ready.go:67] WaitExtra: waitPodCondition: node "old-k8s-version-743648" hosting pod "etcd-old-k8s-version-743648" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-743648" has status "Ready":"False"
I1028 17:50:49.350215 215146 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
I1028 17:50:51.359742 215146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.853025386s)
I1028 17:50:51.359959 215146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (12.387626569s)
I1028 17:50:51.360100 215146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.234385234s)
I1028 17:50:51.361965 215146 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-743648 addons enable metrics-server
I1028 17:50:51.517262 215146 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:50:51.597447 215146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.937713322s)
I1028 17:50:51.597481 215146 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-743648"
I1028 17:50:51.600514 215146 out.go:177] * Enabled addons: storage-provisioner, dashboard, default-storageclass, metrics-server
I1028 17:50:51.602561 215146 addons.go:510] duration metric: took 23.47935473s for enable addons: enabled=[storage-provisioner dashboard default-storageclass metrics-server]
I1028 17:50:53.857965 215146 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:50:56.357489 215146 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:50:58.358957 215146 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:00.364808 215146 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"True"
I1028 17:51:00.364901 215146 pod_ready.go:82] duration metric: took 11.014611858s for pod "kube-apiserver-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
I1028 17:51:00.364932 215146 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
I1028 17:51:02.372272 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:04.873398 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:07.371522 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:09.372719 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:11.380138 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:13.874201 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:16.373263 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:18.872080 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:21.371191 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:23.371478 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:25.871341 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:27.873185 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:30.371410 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:32.871590 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:34.872705 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:37.371451 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:39.371516 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:41.871268 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:43.872531 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:46.373840 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:48.871980 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:51.371966 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:53.871867 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:56.371855 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:58.372925 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:00.382717 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:01.374686 215146 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:01.374713 215146 pod_ready.go:82] duration metric: took 1m1.009759232s for pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
I1028 17:52:01.374725 215146 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zzhjw" in "kube-system" namespace to be "Ready" ...
I1028 17:52:01.380398 215146 pod_ready.go:93] pod "kube-proxy-zzhjw" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:01.380426 215146 pod_ready.go:82] duration metric: took 5.693725ms for pod "kube-proxy-zzhjw" in "kube-system" namespace to be "Ready" ...
I1028 17:52:01.380438 215146 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
I1028 17:52:03.386921 215146 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:05.387165 215146 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:07.887839 215146 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:09.396099 215146 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:09.396169 215146 pod_ready.go:82] duration metric: took 8.015721675s for pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
I1028 17:52:09.396196 215146 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace to be "Ready" ...
I1028 17:52:11.403439 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:13.903256 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:15.905755 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:18.402603 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:20.409296 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:22.903578 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:25.403212 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:27.903456 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:30.402618 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:32.903183 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:35.402894 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:37.402921 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:39.407728 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:41.902217 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:43.902841 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:46.402949 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:48.403172 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:50.902528 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:52.904605 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:55.401888 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:57.404216 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:59.901848 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:01.902156 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:03.903376 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:06.404303 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:08.904591 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:11.402296 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:13.903797 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:16.402739 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:18.903188 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:21.401986 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:23.402635 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:25.902538 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:27.902722 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:29.903115 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:31.903725 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:34.403587 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:36.403772 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:38.903738 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:41.401810 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:43.404321 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:45.404851 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:47.902832 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:49.903288 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:52.403120 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:54.902750 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:56.902993 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:59.403935 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:01.405887 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:03.902918 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:06.402740 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:08.901955 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:10.903113 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:13.404332 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:15.901937 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:17.903008 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:19.903497 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:22.402932 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:24.903029 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:27.402751 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:29.408164 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:31.902254 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:33.902654 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:36.403234 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:38.902989 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:41.403337 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:43.904604 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:45.911071 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:48.402130 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:50.402970 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:52.904597 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:55.402156 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:57.402401 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:59.409121 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:01.902400 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:03.902455 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:05.902611 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:07.903035 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:10.403203 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:12.902646 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:14.904525 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:17.403120 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:19.403710 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:21.902564 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:24.403100 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:26.902735 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:28.902857 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:31.402331 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:33.402846 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:35.902772 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:37.908704 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:40.402250 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:42.403352 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:44.902191 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:46.902393 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:48.902433 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:51.402737 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:53.903135 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:55.904316 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:58.403625 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:00.427009 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:02.902449 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:04.903621 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:06.904653 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:09.402306 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:09.402337 215146 pod_ready.go:82] duration metric: took 4m0.00611291s for pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace to be "Ready" ...
E1028 17:56:09.402348 215146 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1028 17:56:09.402356 215146 pod_ready.go:39] duration metric: took 5m20.571187986s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
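
metrics-server never reaches Ready inside its 4m budget, so the wait surfaces context deadline exceeded; the root cause shows up in the kubelet problems below (its image sits behind the unresolvable fake.domain registry). A sketch of how a deadline-bounded wait produces that exact error, with waitPodReady standing in as a hypothetical helper built from the podReady check sketched earlier:

    // Hypothetical harness shape, not minikube's exact code: bound the
    // wait with a context deadline so a pod that never turns Ready
    // fails with context.DeadlineExceeded.
    func waitMetricsServer(cs kubernetes.Interface) error {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        return waitPodReady(ctx, cs, "kube-system", "metrics-server-9975d5f86-4qgm8")
    }
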
I1028 17:56:09.402371 215146 api_server.go:52] waiting for apiserver process to appear ...
I1028 17:56:09.402405 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1028 17:56:09.402471 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1028 17:56:09.448538 215146 cri.go:89] found id: "be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6"
I1028 17:56:09.448610 215146 cri.go:89] found id: "c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2"
I1028 17:56:09.448617 215146 cri.go:89] found id: ""
I1028 17:56:09.448624 215146 logs.go:282] 2 containers: [be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6 c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2]
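
The cri.go/logs.go pairs above and below enumerate containers by shelling out to crictl; each "found id" line is one ID from that command's stdout. The equivalent call, as a self-contained sketch (helper name assumed):

    package cri

    import (
        "os/exec"
        "strings"
    )

    // criIDs lists container IDs matching a name filter, the same way the
    // "sudo crictl ps -a --quiet --name=..." Run lines in this log do.
    func criIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }
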
I1028 17:56:09.448679 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.453245 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.456632 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1028 17:56:09.456702 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1028 17:56:09.501220 215146 cri.go:89] found id: "ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80"
I1028 17:56:09.501288 215146 cri.go:89] found id: "48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420"
I1028 17:56:09.501299 215146 cri.go:89] found id: ""
I1028 17:56:09.501307 215146 logs.go:282] 2 containers: [ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80 48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420]
I1028 17:56:09.501362 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.505290 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.508437 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1028 17:56:09.508530 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1028 17:56:09.550712 215146 cri.go:89] found id: "0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522"
I1028 17:56:09.550740 215146 cri.go:89] found id: "697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e"
I1028 17:56:09.550748 215146 cri.go:89] found id: ""
I1028 17:56:09.550756 215146 logs.go:282] 2 containers: [0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522 697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e]
I1028 17:56:09.550814 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.554604 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.558130 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1028 17:56:09.558217 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1028 17:56:09.596423 215146 cri.go:89] found id: "0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca"
I1028 17:56:09.596441 215146 cri.go:89] found id: "e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98"
I1028 17:56:09.596446 215146 cri.go:89] found id: ""
I1028 17:56:09.596453 215146 logs.go:282] 2 containers: [0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98]
I1028 17:56:09.596506 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.603505 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.607560 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1028 17:56:09.607627 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1028 17:56:09.644117 215146 cri.go:89] found id: "251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b"
I1028 17:56:09.644141 215146 cri.go:89] found id: "f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1"
I1028 17:56:09.644146 215146 cri.go:89] found id: ""
I1028 17:56:09.644153 215146 logs.go:282] 2 containers: [251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1]
I1028 17:56:09.644207 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.647803 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.651052 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1028 17:56:09.651119 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1028 17:56:09.691655 215146 cri.go:89] found id: "45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6"
I1028 17:56:09.691683 215146 cri.go:89] found id: "07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45"
I1028 17:56:09.691687 215146 cri.go:89] found id: ""
I1028 17:56:09.691694 215146 logs.go:282] 2 containers: [45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6 07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45]
I1028 17:56:09.691755 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.695508 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.699134 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1028 17:56:09.699244 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1028 17:56:09.738913 215146 cri.go:89] found id: "281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572"
I1028 17:56:09.738939 215146 cri.go:89] found id: "eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d"
I1028 17:56:09.738944 215146 cri.go:89] found id: ""
I1028 17:56:09.738951 215146 logs.go:282] 2 containers: [281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572 eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d]
I1028 17:56:09.739019 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.742717 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.746117 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1028 17:56:09.746199 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1028 17:56:09.784046 215146 cri.go:89] found id: "616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245"
I1028 17:56:09.784070 215146 cri.go:89] found id: ""
I1028 17:56:09.784078 215146 logs.go:282] 1 container: [616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245]
I1028 17:56:09.784132 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.787696 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1028 17:56:09.787759 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1028 17:56:09.835450 215146 cri.go:89] found id: "1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220"
I1028 17:56:09.835472 215146 cri.go:89] found id: "7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78"
I1028 17:56:09.835477 215146 cri.go:89] found id: ""
I1028 17:56:09.835484 215146 logs.go:282] 2 containers: [1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220 7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78]
I1028 17:56:09.835550 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.839920 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.844047 215146 logs.go:123] Gathering logs for coredns [0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522] ...
I1028 17:56:09.844069 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522"
I1028 17:56:09.887075 215146 logs.go:123] Gathering logs for kube-proxy [251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b] ...
I1028 17:56:09.887100 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b"
I1028 17:56:09.930757 215146 logs.go:123] Gathering logs for kube-controller-manager [45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6] ...
I1028 17:56:09.930786 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6"
I1028 17:56:09.989516 215146 logs.go:123] Gathering logs for kube-controller-manager [07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45] ...
I1028 17:56:09.989550 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45"
I1028 17:56:10.075876 215146 logs.go:123] Gathering logs for storage-provisioner [1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220] ...
I1028 17:56:10.075913 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220"
I1028 17:56:10.131752 215146 logs.go:123] Gathering logs for container status ...
I1028 17:56:10.131782 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1028 17:56:10.189774 215146 logs.go:123] Gathering logs for kubelet ...
I1028 17:56:10.189811 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
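
The logs.go:138 "Found kubelet problem" lines that follow come from scanning that journalctl output for known failure patterns. A rough sketch of such a scan ("Error syncing pod" is one assumed pattern, not minikube's actual list):

    package scan

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeletProblems prints journalctl lines matching a failure
    // pattern, mirroring the shape of the scan behind logs.go:138.
    func kubeletProblems() error {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo journalctl -u kubelet -n 400").Output()
        if err != nil {
            return err
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, "Error syncing pod") {
                fmt.Println("Found kubelet problem:", line)
            }
        }
        return nil
    }
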
W1028 17:56:10.260140 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.823989 666 reflector.go:138] object-"kube-system"/"metrics-server-token-s8546": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-s8546" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.260375 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.825360 666 reflector.go:138] object-"kube-system"/"kindnet-token-rzkm7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rzkm7" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.260691 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.826686 666 reflector.go:138] object-"kube-system"/"coredns-token-bhrx5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-bhrx5" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.260895 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.827119 666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.261159 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.827457 666 reflector.go:138] object-"kube-system"/"kube-proxy-token-j55lg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-j55lg" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.263578 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862535 666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-42rw4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-42rw4" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.263789 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862690 666 reflector.go:138] object-"default"/"default-token-8g42r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8g42r" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.265241 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862027 666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.273845 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:52 old-k8s-version-743648 kubelet[666]: E1028 17:50:52.034873 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:10.274042 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:52 old-k8s-version-743648 kubelet[666]: E1028 17:50:52.426336 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.276900 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:04 old-k8s-version-743648 kubelet[666]: E1028 17:51:04.181734 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:10.278592 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:15 old-k8s-version-743648 kubelet[666]: E1028 17:51:15.145992 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.279523 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:20 old-k8s-version-743648 kubelet[666]: E1028 17:51:20.589441 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.279852 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:21 old-k8s-version-743648 kubelet[666]: E1028 17:51:21.593639 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.280289 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:23 old-k8s-version-743648 kubelet[666]: E1028 17:51:23.609906 666 pod_workers.go:191] Error syncing pod 8ffc3abd-c784-474f-80a5-f6a8b25abc51 ("storage-provisioner_kube-system(8ffc3abd-c784-474f-80a5-f6a8b25abc51)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8ffc3abd-c784-474f-80a5-f6a8b25abc51)"
W1028 17:56:10.280628 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:25 old-k8s-version-743648 kubelet[666]: E1028 17:51:25.775266 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.283444 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:29 old-k8s-version-743648 kubelet[666]: E1028 17:51:29.147486 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:10.283892 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:41 old-k8s-version-743648 kubelet[666]: E1028 17:51:41.138852 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.284352 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:41 old-k8s-version-743648 kubelet[666]: E1028 17:51:41.660884 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.284688 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:45 old-k8s-version-743648 kubelet[666]: E1028 17:51:45.774632 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.284874 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:52 old-k8s-version-743648 kubelet[666]: E1028 17:51:52.138242 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.285204 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:59 old-k8s-version-743648 kubelet[666]: E1028 17:51:59.137591 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.285395 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:05 old-k8s-version-743648 kubelet[666]: E1028 17:52:05.138208 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.285982 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:14 old-k8s-version-743648 kubelet[666]: E1028 17:52:14.756224 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.286310 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:15 old-k8s-version-743648 kubelet[666]: E1028 17:52:15.774720 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.288793 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:17 old-k8s-version-743648 kubelet[666]: E1028 17:52:17.148693 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:10.289126 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:28 old-k8s-version-743648 kubelet[666]: E1028 17:52:28.141800 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.289309 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:32 old-k8s-version-743648 kubelet[666]: E1028 17:52:32.138343 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.289637 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:40 old-k8s-version-743648 kubelet[666]: E1028 17:52:40.138205 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.289820 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:44 old-k8s-version-743648 kubelet[666]: E1028 17:52:44.140775 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.290404 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:55 old-k8s-version-743648 kubelet[666]: E1028 17:52:55.867579 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.290588 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:58 old-k8s-version-743648 kubelet[666]: E1028 17:52:58.138213 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.290916 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:05 old-k8s-version-743648 kubelet[666]: E1028 17:53:05.775406 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.291101 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:13 old-k8s-version-743648 kubelet[666]: E1028 17:53:13.138098 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.291427 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:17 old-k8s-version-743648 kubelet[666]: E1028 17:53:17.137660 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.291608 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:25 old-k8s-version-743648 kubelet[666]: E1028 17:53:25.138080 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.291950 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:28 old-k8s-version-743648 kubelet[666]: E1028 17:53:28.141572 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.294378 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:39 old-k8s-version-743648 kubelet[666]: E1028 17:53:39.147007 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:10.294718 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:43 old-k8s-version-743648 kubelet[666]: E1028 17:53:43.137608 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.294904 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:52 old-k8s-version-743648 kubelet[666]: E1028 17:53:52.138963 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.295229 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:57 old-k8s-version-743648 kubelet[666]: E1028 17:53:57.137666 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.295414 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:03 old-k8s-version-743648 kubelet[666]: E1028 17:54:03.139810 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.295738 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:08 old-k8s-version-743648 kubelet[666]: E1028 17:54:08.137748 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.295926 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:15 old-k8s-version-743648 kubelet[666]: E1028 17:54:15.138210 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.296514 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:22 old-k8s-version-743648 kubelet[666]: E1028 17:54:22.110858 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.296852 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:25 old-k8s-version-743648 kubelet[666]: E1028 17:54:25.774510 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.297038 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:26 old-k8s-version-743648 kubelet[666]: E1028 17:54:26.142477 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.297362 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:36 old-k8s-version-743648 kubelet[666]: E1028 17:54:36.140263 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.297545 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:40 old-k8s-version-743648 kubelet[666]: E1028 17:54:40.138696 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.297872 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:51 old-k8s-version-743648 kubelet[666]: E1028 17:54:51.137643 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.298055 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:54 old-k8s-version-743648 kubelet[666]: E1028 17:54:54.140745 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.298386 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:05 old-k8s-version-743648 kubelet[666]: E1028 17:55:05.137675 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.298570 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:06 old-k8s-version-743648 kubelet[666]: E1028 17:55:06.141555 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.298895 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: E1028 17:55:18.142304 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.299079 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: E1028 17:55:18.144881 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.299262 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.145457 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.299586 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.148252 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.299910 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.143236 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.300093 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.149385 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.300421 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:56 old-k8s-version-743648 kubelet[666]: E1028 17:55:56.138548 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.300632 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:59 old-k8s-version-743648 kubelet[666]: E1028 17:55:59.138036 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
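The two failures repeated above dominate the rest of the run: metrics-server is pinned to the unresolvable registry fake.domain, so every pull ends in ErrImagePull and then ImagePullBackOff, while dashboard-metrics-scraper is crash-looping under a growing back-off. A minimal sketch for inspecting both by hand from a shell on the node (e.g. minikube ssh -p old-k8s-version-743648), reusing the node-local kubectl invocation this log uses below and the pod names from the messages above; the nslookup target is the DNS server named in the pull error:
$ sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system describe pod metrics-server-9975d5f86-4qgm8              # shows the ErrImagePull events
$ sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-jx67q --previous   # last crashed container
$ nslookup fake.domain 192.168.76.1                                         # fails, matching kubelet's "no such host"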
I1028 17:56:10.300643 215146 logs.go:123] Gathering logs for describe nodes ...
I1028 17:56:10.300657 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1028 17:56:10.461900 215146 logs.go:123] Gathering logs for etcd [ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80] ...
I1028 17:56:10.461929 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80"
I1028 17:56:10.543636 215146 logs.go:123] Gathering logs for etcd [48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420] ...
I1028 17:56:10.543667 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420"
I1028 17:56:10.596502 215146 logs.go:123] Gathering logs for kube-scheduler [0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca] ...
I1028 17:56:10.596534 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca"
I1028 17:56:10.668010 215146 logs.go:123] Gathering logs for kube-proxy [f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1] ...
I1028 17:56:10.668084 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1"
I1028 17:56:10.718076 215146 logs.go:123] Gathering logs for containerd ...
I1028 17:56:10.718154 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1028 17:56:10.799093 215146 logs.go:123] Gathering logs for dmesg ...
I1028 17:56:10.799138 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1028 17:56:10.817662 215146 logs.go:123] Gathering logs for kube-apiserver [c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2] ...
I1028 17:56:10.817695 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2"
I1028 17:56:10.898218 215146 logs.go:123] Gathering logs for kindnet [eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d] ...
I1028 17:56:10.898249 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d"
I1028 17:56:10.949120 215146 logs.go:123] Gathering logs for kubernetes-dashboard [616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245] ...
I1028 17:56:10.949162 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245"
I1028 17:56:11.003335 215146 logs.go:123] Gathering logs for kube-apiserver [be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6] ...
I1028 17:56:11.003364 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6"
I1028 17:56:11.097068 215146 logs.go:123] Gathering logs for kindnet [281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572] ...
I1028 17:56:11.097153 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572"
I1028 17:56:11.199799 215146 logs.go:123] Gathering logs for storage-provisioner [7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78] ...
I1028 17:56:11.199905 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78"
I1028 17:56:11.294982 215146 logs.go:123] Gathering logs for coredns [697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e] ...
I1028 17:56:11.295052 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e"
I1028 17:56:11.350232 215146 logs.go:123] Gathering logs for kube-scheduler [e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98] ...
I1028 17:56:11.350305 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98"
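Every "Gathering logs for ..." step above is a single command executed over SSH inside the node, so the whole collection can be reproduced by hand from a node shell; the commands below are copied from the Run: lines, with a placeholder standing in for the per-component container ID:
$ sudo /usr/bin/crictl logs --tail 400 <container-id>                       # any component's container logs
$ sudo journalctl -u containerd -n 400                                      # container runtime logs
$ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
$ sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig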
I1028 17:56:11.409560 215146 out.go:358] Setting ErrFile to fd 2...
I1028 17:56:11.409639 215146 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1028 17:56:11.409735 215146 out.go:270] X Problems detected in kubelet:
W1028 17:56:11.409781 215146 out.go:270] Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.148252 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:11.409949 215146 out.go:270] Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.143236 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:11.409983 215146 out.go:270] Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.149385 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:11.410035 215146 out.go:270] Oct 28 17:55:56 old-k8s-version-743648 kubelet[666]: E1028 17:55:56.138548 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:11.410082 215146 out.go:270] Oct 28 17:55:59 old-k8s-version-743648 kubelet[666]: E1028 17:55:59.138036 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1028 17:56:11.410132 215146 out.go:358] Setting ErrFile to fd 2...
I1028 17:56:11.410153 215146 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:56:21.411726 215146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1028 17:56:21.424434 215146 api_server.go:72] duration metric: took 5m53.301552069s to wait for apiserver process to appear ...
I1028 17:56:21.424458 215146 api_server.go:88] waiting for apiserver healthz status ...
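The healthz wait announced here keeps polling until the apiserver answers. A hedged one-shot equivalent from the node, reusing the node-local kubectl from the describe-nodes step (kubectl get --raw is standard):
$ sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz   # prints "ok" once healthy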
I1028 17:56:21.424495 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1028 17:56:21.424586 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1028 17:56:21.462686 215146 cri.go:89] found id: "be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6"
I1028 17:56:21.462708 215146 cri.go:89] found id: "c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2"
I1028 17:56:21.462713 215146 cri.go:89] found id: ""
I1028 17:56:21.462721 215146 logs.go:282] 2 containers: [be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6 c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2]
I1028 17:56:21.462783 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.466844 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.477116 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1028 17:56:21.477193 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1028 17:56:21.528404 215146 cri.go:89] found id: "ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80"
I1028 17:56:21.528424 215146 cri.go:89] found id: "48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420"
I1028 17:56:21.528429 215146 cri.go:89] found id: ""
I1028 17:56:21.528436 215146 logs.go:282] 2 containers: [ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80 48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420]
I1028 17:56:21.528490 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.532483 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.536429 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1028 17:56:21.536503 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1028 17:56:21.577695 215146 cri.go:89] found id: "0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522"
I1028 17:56:21.577724 215146 cri.go:89] found id: "697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e"
I1028 17:56:21.577729 215146 cri.go:89] found id: ""
I1028 17:56:21.577737 215146 logs.go:282] 2 containers: [0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522 697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e]
I1028 17:56:21.577814 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.581875 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.585396 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1028 17:56:21.585469 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1028 17:56:21.626201 215146 cri.go:89] found id: "0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca"
I1028 17:56:21.626225 215146 cri.go:89] found id: "e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98"
I1028 17:56:21.626230 215146 cri.go:89] found id: ""
I1028 17:56:21.626237 215146 logs.go:282] 2 containers: [0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98]
I1028 17:56:21.626295 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.629990 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.633594 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1028 17:56:21.633663 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1028 17:56:21.676659 215146 cri.go:89] found id: "251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b"
I1028 17:56:21.676682 215146 cri.go:89] found id: "f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1"
I1028 17:56:21.676687 215146 cri.go:89] found id: ""
I1028 17:56:21.676694 215146 logs.go:282] 2 containers: [251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1]
I1028 17:56:21.676753 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.681207 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.684753 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1028 17:56:21.684826 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1028 17:56:21.758221 215146 cri.go:89] found id: "45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6"
I1028 17:56:21.758242 215146 cri.go:89] found id: "07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45"
I1028 17:56:21.758247 215146 cri.go:89] found id: ""
I1028 17:56:21.758254 215146 logs.go:282] 2 containers: [45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6 07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45]
I1028 17:56:21.758309 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.762388 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.765992 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1028 17:56:21.766050 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1028 17:56:21.815014 215146 cri.go:89] found id: "281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572"
I1028 17:56:21.815040 215146 cri.go:89] found id: "eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d"
I1028 17:56:21.815045 215146 cri.go:89] found id: ""
I1028 17:56:21.815052 215146 logs.go:282] 2 containers: [281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572 eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d]
I1028 17:56:21.815108 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.818748 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.822240 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1028 17:56:21.822340 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1028 17:56:21.865838 215146 cri.go:89] found id: "616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245"
I1028 17:56:21.865901 215146 cri.go:89] found id: ""
I1028 17:56:21.865916 215146 logs.go:282] 1 containers: [616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245]
I1028 17:56:21.865971 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.870830 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1028 17:56:21.870949 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1028 17:56:21.910658 215146 cri.go:89] found id: "1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220"
I1028 17:56:21.910685 215146 cri.go:89] found id: "7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78"
I1028 17:56:21.910689 215146 cri.go:89] found id: ""
I1028 17:56:21.910699 215146 logs.go:282] 2 containers: [1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220 7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78]
I1028 17:56:21.910774 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.914374 215146 ssh_runner.go:195] Run: which crictl
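The discovery pass above turns up two IDs for nearly every component because crictl ps -a also lists exited containers and the control plane was restarted mid-test, leaving one old and one current container per name; kubernetes-dashboard shows a single ID, presumably because it first started after the restart. The same lookup by hand:
$ sudo crictl ps -a --quiet --name=kube-apiserver   # two IDs: the pre-restart and post-restart containers
$ which crictl                                      # the test re-resolves the binary path before each call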
I1028 17:56:21.917740 215146 logs.go:123] Gathering logs for kube-scheduler [e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98] ...
I1028 17:56:21.917766 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98"
I1028 17:56:21.958055 215146 logs.go:123] Gathering logs for kubernetes-dashboard [616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245] ...
I1028 17:56:21.958089 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245"
I1028 17:56:22.000135 215146 logs.go:123] Gathering logs for storage-provisioner [1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220] ...
I1028 17:56:22.000162 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220"
I1028 17:56:22.043187 215146 logs.go:123] Gathering logs for kubelet ...
I1028 17:56:22.043217 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1028 17:56:22.097360 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.823989 666 reflector.go:138] object-"kube-system"/"metrics-server-token-s8546": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-s8546" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.097593 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.825360 666 reflector.go:138] object-"kube-system"/"kindnet-token-rzkm7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rzkm7" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.097805 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.826686 666 reflector.go:138] object-"kube-system"/"coredns-token-bhrx5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-bhrx5" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.098006 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.827119 666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.098222 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.827457 666 reflector.go:138] object-"kube-system"/"kube-proxy-token-j55lg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-j55lg" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.100682 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862535 666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-42rw4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-42rw4" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.100895 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862690 666 reflector.go:138] object-"default"/"default-token-8g42r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8g42r" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.102299 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862027 666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.110663 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:52 old-k8s-version-743648 kubelet[666]: E1028 17:50:52.034873 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:22.110853 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:52 old-k8s-version-743648 kubelet[666]: E1028 17:50:52.426336 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.113643 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:04 old-k8s-version-743648 kubelet[666]: E1028 17:51:04.181734 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:22.115310 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:15 old-k8s-version-743648 kubelet[666]: E1028 17:51:15.145992 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.116227 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:20 old-k8s-version-743648 kubelet[666]: E1028 17:51:20.589441 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.116583 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:21 old-k8s-version-743648 kubelet[666]: E1028 17:51:21.593639 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.117028 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:23 old-k8s-version-743648 kubelet[666]: E1028 17:51:23.609906 666 pod_workers.go:191] Error syncing pod 8ffc3abd-c784-474f-80a5-f6a8b25abc51 ("storage-provisioner_kube-system(8ffc3abd-c784-474f-80a5-f6a8b25abc51)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8ffc3abd-c784-474f-80a5-f6a8b25abc51)"
W1028 17:56:22.117353 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:25 old-k8s-version-743648 kubelet[666]: E1028 17:51:25.775266 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.120160 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:29 old-k8s-version-743648 kubelet[666]: E1028 17:51:29.147486 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:22.120698 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:41 old-k8s-version-743648 kubelet[666]: E1028 17:51:41.138852 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.121161 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:41 old-k8s-version-743648 kubelet[666]: E1028 17:51:41.660884 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.121488 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:45 old-k8s-version-743648 kubelet[666]: E1028 17:51:45.774632 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.121670 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:52 old-k8s-version-743648 kubelet[666]: E1028 17:51:52.138242 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.121998 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:59 old-k8s-version-743648 kubelet[666]: E1028 17:51:59.137591 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.122182 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:05 old-k8s-version-743648 kubelet[666]: E1028 17:52:05.138208 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.122765 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:14 old-k8s-version-743648 kubelet[666]: E1028 17:52:14.756224 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.123092 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:15 old-k8s-version-743648 kubelet[666]: E1028 17:52:15.774720 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.125572 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:17 old-k8s-version-743648 kubelet[666]: E1028 17:52:17.148693 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:22.125906 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:28 old-k8s-version-743648 kubelet[666]: E1028 17:52:28.141800 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.126094 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:32 old-k8s-version-743648 kubelet[666]: E1028 17:52:32.138343 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.126423 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:40 old-k8s-version-743648 kubelet[666]: E1028 17:52:40.138205 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.126605 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:44 old-k8s-version-743648 kubelet[666]: E1028 17:52:44.140775 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.127190 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:55 old-k8s-version-743648 kubelet[666]: E1028 17:52:55.867579 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.127374 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:58 old-k8s-version-743648 kubelet[666]: E1028 17:52:58.138213 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.127698 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:05 old-k8s-version-743648 kubelet[666]: E1028 17:53:05.775406 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.127885 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:13 old-k8s-version-743648 kubelet[666]: E1028 17:53:13.138098 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.128213 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:17 old-k8s-version-743648 kubelet[666]: E1028 17:53:17.137660 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.128398 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:25 old-k8s-version-743648 kubelet[666]: E1028 17:53:25.138080 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.128741 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:28 old-k8s-version-743648 kubelet[666]: E1028 17:53:28.141572 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.131182 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:39 old-k8s-version-743648 kubelet[666]: E1028 17:53:39.147007 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:22.131510 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:43 old-k8s-version-743648 kubelet[666]: E1028 17:53:43.137608 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.131694 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:52 old-k8s-version-743648 kubelet[666]: E1028 17:53:52.138963 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.132018 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:57 old-k8s-version-743648 kubelet[666]: E1028 17:53:57.137666 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.132200 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:03 old-k8s-version-743648 kubelet[666]: E1028 17:54:03.139810 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.132523 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:08 old-k8s-version-743648 kubelet[666]: E1028 17:54:08.137748 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.132714 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:15 old-k8s-version-743648 kubelet[666]: E1028 17:54:15.138210 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.133317 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:22 old-k8s-version-743648 kubelet[666]: E1028 17:54:22.110858 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.133641 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:25 old-k8s-version-743648 kubelet[666]: E1028 17:54:25.774510 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.133824 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:26 old-k8s-version-743648 kubelet[666]: E1028 17:54:26.142477 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.134153 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:36 old-k8s-version-743648 kubelet[666]: E1028 17:54:36.140263 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.134335 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:40 old-k8s-version-743648 kubelet[666]: E1028 17:54:40.138696 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.134661 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:51 old-k8s-version-743648 kubelet[666]: E1028 17:54:51.137643 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.134843 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:54 old-k8s-version-743648 kubelet[666]: E1028 17:54:54.140745 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.135171 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:05 old-k8s-version-743648 kubelet[666]: E1028 17:55:05.137675 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.135355 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:06 old-k8s-version-743648 kubelet[666]: E1028 17:55:06.141555 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.135679 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: E1028 17:55:18.142304 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.135860 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: E1028 17:55:18.144881 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.136044 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.145457 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.136368 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.148252 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.136701 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.143236 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.136883 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.149385 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.137231 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:56 old-k8s-version-743648 kubelet[666]: E1028 17:55:56.138548 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.137419 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:59 old-k8s-version-743648 kubelet[666]: E1028 17:55:59.138036 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.137747 215146 logs.go:138] Found kubelet problem: Oct 28 17:56:11 old-k8s-version-743648 kubelet[666]: E1028 17:56:11.140245 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.137930 215146 logs.go:138] Found kubelet problem: Oct 28 17:56:13 old-k8s-version-743648 kubelet[666]: E1028 17:56:13.138081 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1028 17:56:22.137946 215146 logs.go:123] Gathering logs for dmesg ...
I1028 17:56:22.137960 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1028 17:56:22.158369 215146 logs.go:123] Gathering logs for kube-apiserver [be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6] ...
I1028 17:56:22.158447 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6"
I1028 17:56:22.213784 215146 logs.go:123] Gathering logs for kube-apiserver [c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2] ...
I1028 17:56:22.213817 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2"
I1028 17:56:22.279566 215146 logs.go:123] Gathering logs for coredns [0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522] ...
I1028 17:56:22.279609 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522"
I1028 17:56:22.320096 215146 logs.go:123] Gathering logs for storage-provisioner [7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78] ...
I1028 17:56:22.320123 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78"
I1028 17:56:22.366694 215146 logs.go:123] Gathering logs for kindnet [281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572] ...
I1028 17:56:22.366732 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572"
I1028 17:56:22.417133 215146 logs.go:123] Gathering logs for describe nodes ...
I1028 17:56:22.417163 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1028 17:56:22.575425 215146 logs.go:123] Gathering logs for coredns [697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e] ...
I1028 17:56:22.575457 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e"
I1028 17:56:22.613582 215146 logs.go:123] Gathering logs for kube-proxy [251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b] ...
I1028 17:56:22.613610 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b"
I1028 17:56:22.660854 215146 logs.go:123] Gathering logs for kube-controller-manager [45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6] ...
I1028 17:56:22.660882 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6"
I1028 17:56:22.717997 215146 logs.go:123] Gathering logs for kube-controller-manager [07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45] ...
I1028 17:56:22.718031 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45"
I1028 17:56:22.775654 215146 logs.go:123] Gathering logs for etcd [ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80] ...
I1028 17:56:22.775689 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80"
I1028 17:56:22.829841 215146 logs.go:123] Gathering logs for etcd [48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420] ...
I1028 17:56:22.829880 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420"
I1028 17:56:22.873201 215146 logs.go:123] Gathering logs for kube-proxy [f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1] ...
I1028 17:56:22.873231 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1"
I1028 17:56:22.917373 215146 logs.go:123] Gathering logs for container status ...
I1028 17:56:22.917399 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1028 17:56:22.966858 215146 logs.go:123] Gathering logs for kube-scheduler [0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca] ...
I1028 17:56:22.966891 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca"
I1028 17:56:23.010694 215146 logs.go:123] Gathering logs for kindnet [eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d] ...
I1028 17:56:23.010731 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d"
I1028 17:56:23.069106 215146 logs.go:123] Gathering logs for containerd ...
I1028 17:56:23.069137 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1028 17:56:23.132899 215146 out.go:358] Setting ErrFile to fd 2...
I1028 17:56:23.132932 215146 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1028 17:56:23.133001 215146 out.go:270] X Problems detected in kubelet:
W1028 17:56:23.133017 215146 out.go:270] Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.149385 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:23.133051 215146 out.go:270] Oct 28 17:55:56 old-k8s-version-743648 kubelet[666]: E1028 17:55:56.138548 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:23.133060 215146 out.go:270] Oct 28 17:55:59 old-k8s-version-743648 kubelet[666]: E1028 17:55:59.138036 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:23.133067 215146 out.go:270] Oct 28 17:56:11 old-k8s-version-743648 kubelet[666]: E1028 17:56:11.140245 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:23.133073 215146 out.go:270] Oct 28 17:56:13 old-k8s-version-743648 kubelet[666]: E1028 17:56:13.138081 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1028 17:56:23.133082 215146 out.go:358] Setting ErrFile to fd 2...
I1028 17:56:23.133089 215146 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:56:33.134758 215146 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1028 17:56:33.146466 215146 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1028 17:56:33.148907 215146 out.go:201]
W1028 17:56:33.150944 215146 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1028 17:56:33.150988 215146 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1028 17:56:33.151008 215146 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1028 17:56:33.151018 215146 out.go:270] *
W1028 17:56:33.151915 215146 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1028 17:56:33.154167 215146 out.go:201]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-743648 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-743648
helpers_test.go:235: (dbg) docker inspect old-k8s-version-743648:
-- stdout --
[
{
"Id": "4c5718840b60fe074564d39fac7140a2cb2150b953d807efe276ee364039a882",
"Created": "2024-10-28T17:47:31.778560195Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 215428,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-10-28T17:50:20.505926588Z",
"FinishedAt": "2024-10-28T17:50:19.38112652Z"
},
"Image": "sha256:e93e681afec646b1183cc5ed9957e6950020eb724f4af8d4e63074eba4425a9d",
"ResolvConfPath": "/var/lib/docker/containers/4c5718840b60fe074564d39fac7140a2cb2150b953d807efe276ee364039a882/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/4c5718840b60fe074564d39fac7140a2cb2150b953d807efe276ee364039a882/hostname",
"HostsPath": "/var/lib/docker/containers/4c5718840b60fe074564d39fac7140a2cb2150b953d807efe276ee364039a882/hosts",
"LogPath": "/var/lib/docker/containers/4c5718840b60fe074564d39fac7140a2cb2150b953d807efe276ee364039a882/4c5718840b60fe074564d39fac7140a2cb2150b953d807efe276ee364039a882-json.log",
"Name": "/old-k8s-version-743648",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"old-k8s-version-743648:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-743648",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/e0d5fcf77ddad13b320c07bdc8bb7444961bbd021972948cb393e5dbd0765bf2-init/diff:/var/lib/docker/overlay2/fe05ec7db64248f1e958bdfc9b9ba43df1fa4038fd82b964b9c44dfc45743ce4/diff",
"MergedDir": "/var/lib/docker/overlay2/e0d5fcf77ddad13b320c07bdc8bb7444961bbd021972948cb393e5dbd0765bf2/merged",
"UpperDir": "/var/lib/docker/overlay2/e0d5fcf77ddad13b320c07bdc8bb7444961bbd021972948cb393e5dbd0765bf2/diff",
"WorkDir": "/var/lib/docker/overlay2/e0d5fcf77ddad13b320c07bdc8bb7444961bbd021972948cb393e5dbd0765bf2/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "old-k8s-version-743648",
"Source": "/var/lib/docker/volumes/old-k8s-version-743648/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "old-k8s-version-743648",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-743648",
"name.minikube.sigs.k8s.io": "old-k8s-version-743648",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "7ed0f88d2c1806020811663c2b58c0b06d233526db1777f2bc83a2bd31e3458d",
"SandboxKey": "/var/run/docker/netns/7ed0f88d2c18",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33066"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33067"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33070"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33068"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33069"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-743648": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "a10a776bab45ddce4d7b589e01809f782f032b39f7671285087dd9f02ccddd4a",
"EndpointID": "07fb6f0765ba07a0b700671f69ae25049b2fb1416ba68ff0ad9be22bf4de0a84",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-743648",
"4c5718840b60"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-743648 -n old-k8s-version-743648
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-743648 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-743648 logs -n 25: (2.406718184s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| ssh | -p cilium-222958 sudo find | cilium-222958 | jenkins | v1.34.0 | 28 Oct 24 17:46 UTC | |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-222958 sudo crio | cilium-222958 | jenkins | v1.34.0 | 28 Oct 24 17:46 UTC | |
| | config | | | | | |
| delete | -p cilium-222958 | cilium-222958 | jenkins | v1.34.0 | 28 Oct 24 17:46 UTC | 28 Oct 24 17:46 UTC |
| start | -p force-systemd-env-151163 | force-systemd-env-151163 | jenkins | v1.34.0 | 28 Oct 24 17:46 UTC | 28 Oct 24 17:46 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-flag-884615 | force-systemd-flag-884615 | jenkins | v1.34.0 | 28 Oct 24 17:46 UTC | 28 Oct 24 17:46 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-flag-884615 | force-systemd-flag-884615 | jenkins | v1.34.0 | 28 Oct 24 17:46 UTC | 28 Oct 24 17:46 UTC |
| start | -p cert-expiration-568092 | cert-expiration-568092 | jenkins | v1.34.0 | 28 Oct 24 17:46 UTC | 28 Oct 24 17:47 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-151163 | force-systemd-env-151163 | jenkins | v1.34.0 | 28 Oct 24 17:46 UTC | 28 Oct 24 17:46 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-151163 | force-systemd-env-151163 | jenkins | v1.34.0 | 28 Oct 24 17:46 UTC | 28 Oct 24 17:46 UTC |
| start | -p cert-options-636992 | cert-options-636992 | jenkins | v1.34.0 | 28 Oct 24 17:46 UTC | 28 Oct 24 17:47 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-636992 ssh | cert-options-636992 | jenkins | v1.34.0 | 28 Oct 24 17:47 UTC | 28 Oct 24 17:47 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-636992 -- sudo | cert-options-636992 | jenkins | v1.34.0 | 28 Oct 24 17:47 UTC | 28 Oct 24 17:47 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-636992 | cert-options-636992 | jenkins | v1.34.0 | 28 Oct 24 17:47 UTC | 28 Oct 24 17:47 UTC |
| start | -p old-k8s-version-743648 | old-k8s-version-743648 | jenkins | v1.34.0 | 28 Oct 24 17:47 UTC | 28 Oct 24 17:49 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-568092 | cert-expiration-568092 | jenkins | v1.34.0 | 28 Oct 24 17:50 UTC | 28 Oct 24 17:50 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| addons | enable metrics-server -p old-k8s-version-743648 | old-k8s-version-743648 | jenkins | v1.34.0 | 28 Oct 24 17:50 UTC | 28 Oct 24 17:50 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-743648 | old-k8s-version-743648 | jenkins | v1.34.0 | 28 Oct 24 17:50 UTC | 28 Oct 24 17:50 UTC |
| | --alsologtostderr -v=3 | | | | | |
| delete | -p cert-expiration-568092 | cert-expiration-568092 | jenkins | v1.34.0 | 28 Oct 24 17:50 UTC | 28 Oct 24 17:50 UTC |
| start | -p no-preload-671620 | no-preload-671620 | jenkins | v1.34.0 | 28 Oct 24 17:50 UTC | 28 Oct 24 17:51 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| addons | enable dashboard -p old-k8s-version-743648 | old-k8s-version-743648 | jenkins | v1.34.0 | 28 Oct 24 17:50 UTC | 28 Oct 24 17:50 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-743648 | old-k8s-version-743648 | jenkins | v1.34.0 | 28 Oct 24 17:50 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-671620 | no-preload-671620 | jenkins | v1.34.0 | 28 Oct 24 17:51 UTC | 28 Oct 24 17:51 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-671620 | no-preload-671620 | jenkins | v1.34.0 | 28 Oct 24 17:51 UTC | 28 Oct 24 17:51 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-671620 | no-preload-671620 | jenkins | v1.34.0 | 28 Oct 24 17:51 UTC | 28 Oct 24 17:51 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-671620 | no-preload-671620 | jenkins | v1.34.0 | 28 Oct 24 17:51 UTC | 28 Oct 24 17:56 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/10/28 17:51:57
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.23.2 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1028 17:51:57.292188 221317 out.go:345] Setting OutFile to fd 1 ...
I1028 17:51:57.292745 221317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:51:57.292800 221317 out.go:358] Setting ErrFile to fd 2...
I1028 17:51:57.292820 221317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:51:57.293381 221317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19872-2421/.minikube/bin
I1028 17:51:57.293944 221317 out.go:352] Setting JSON to false
I1028 17:51:57.295021 221317 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5664,"bootTime":1730132254,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1028 17:51:57.295174 221317 start.go:139] virtualization:
I1028 17:51:57.298076 221317 out.go:177] * [no-preload-671620] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1028 17:51:57.299700 221317 out.go:177] - MINIKUBE_LOCATION=19872
I1028 17:51:57.299787 221317 notify.go:220] Checking for updates...
I1028 17:51:57.304084 221317 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1028 17:51:57.306177 221317 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19872-2421/kubeconfig
I1028 17:51:57.308558 221317 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19872-2421/.minikube
I1028 17:51:57.310536 221317 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1028 17:51:57.312734 221317 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1028 17:51:57.315489 221317 config.go:182] Loaded profile config "no-preload-671620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 17:51:57.316078 221317 driver.go:394] Setting default libvirt URI to qemu:///system
I1028 17:51:57.345443 221317 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1028 17:51:57.345580 221317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1028 17:51:57.401764 221317 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 17:51:57.391582801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1028 17:51:57.401881 221317 docker.go:318] overlay module found
I1028 17:51:57.404451 221317 out.go:177] * Using the docker driver based on existing profile
I1028 17:51:57.406148 221317 start.go:297] selected driver: docker
I1028 17:51:57.406173 221317 start.go:901] validating driver "docker" against &{Name:no-preload-671620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-671620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 17:51:57.406287 221317 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1028 17:51:57.406992 221317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1028 17:51:57.486992 221317 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 17:51:57.477593611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1028 17:51:57.487376 221317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1028 17:51:57.487413 221317 cni.go:84] Creating CNI manager for ""
I1028 17:51:57.487456 221317 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1028 17:51:57.487515 221317 start.go:340] cluster config:
{Name:no-preload-671620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-671620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 17:51:57.489923 221317 out.go:177] * Starting "no-preload-671620" primary control-plane node in "no-preload-671620" cluster
I1028 17:51:57.491875 221317 cache.go:121] Beginning downloading kic base image for docker with containerd
I1028 17:51:57.493785 221317 out.go:177] * Pulling base image v0.0.45-1730110049-19872 ...
I1028 17:51:57.495637 221317 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1028 17:51:57.495781 221317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/no-preload-671620/config.json ...
I1028 17:51:57.496092 221317 cache.go:107] acquiring lock: {Name:mkc1889e206eda6e224891ce75a90606db80ad48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:51:57.496174 221317 cache.go:115] /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1028 17:51:57.496187 221317 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.077µs
I1028 17:51:57.496196 221317 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1028 17:51:57.496207 221317 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local docker daemon
I1028 17:51:57.496398 221317 cache.go:107] acquiring lock: {Name:mkbe39f94ce47fc6924fd6d7d9b0caa5dc6ce303 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:51:57.496450 221317 cache.go:115] /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
I1028 17:51:57.496461 221317 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 69.234µs
I1028 17:51:57.496469 221317 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
I1028 17:51:57.496517 221317 cache.go:107] acquiring lock: {Name:mk6e6f6f3941301e6a5f9fd93170bdc07d431f2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:51:57.496654 221317 cache.go:115] /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
I1028 17:51:57.496669 221317 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 153.713µs
I1028 17:51:57.496676 221317 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
I1028 17:51:57.496700 221317 cache.go:107] acquiring lock: {Name:mk354793a5b27fa4a1f0232522aa406c714d7993 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:51:57.496736 221317 cache.go:115] /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
I1028 17:51:57.496747 221317 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 61.398µs
I1028 17:51:57.496754 221317 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
I1028 17:51:57.496763 221317 cache.go:107] acquiring lock: {Name:mkfc0570b8e448c2f7b92efb40bbcd9d00afd8f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:51:57.496794 221317 cache.go:115] /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
I1028 17:51:57.496803 221317 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 40.878µs
I1028 17:51:57.496809 221317 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
I1028 17:51:57.496820 221317 cache.go:107] acquiring lock: {Name:mk9b4ce2c5865c4802c4745e438ea7793c4977d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:51:57.496851 221317 cache.go:115] /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
I1028 17:51:57.496860 221317 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 41.198µs
I1028 17:51:57.496866 221317 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
I1028 17:51:57.496875 221317 cache.go:107] acquiring lock: {Name:mk8b06fb95df9c3f6a6c852d7dd9f7f27ea08a2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:51:57.496905 221317 cache.go:115] /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
I1028 17:51:57.496913 221317 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 39.015µs
I1028 17:51:57.496919 221317 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
I1028 17:51:57.496938 221317 cache.go:107] acquiring lock: {Name:mk7b353df65769eceb1e3bab1d5659a44412a457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:51:57.496970 221317 cache.go:115] /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
I1028 17:51:57.496979 221317 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 42.633µs
I1028 17:51:57.496985 221317 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19872-2421/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
I1028 17:51:57.496992 221317 cache.go:87] Successfully saved all images to host disk.
I1028 17:51:57.518636 221317 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 in local docker daemon, skipping pull
I1028 17:51:57.518688 221317 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 exists in daemon, skipping load
I1028 17:51:57.518707 221317 cache.go:194] Successfully downloaded all kic artifacts
I1028 17:51:57.518735 221317 start.go:360] acquireMachinesLock for no-preload-671620: {Name:mkfee9526470ae7fcc7fe29e972c85b9ebc7ffd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 17:51:57.518794 221317 start.go:364] duration metric: took 41.485µs to acquireMachinesLock for "no-preload-671620"
I1028 17:51:57.518820 221317 start.go:96] Skipping create...Using existing machine configuration
I1028 17:51:57.518828 221317 fix.go:54] fixHost starting:
I1028 17:51:57.519092 221317 cli_runner.go:164] Run: docker container inspect no-preload-671620 --format={{.State.Status}}
I1028 17:51:57.535107 221317 fix.go:112] recreateIfNeeded on no-preload-671620: state=Stopped err=<nil>
W1028 17:51:57.535147 221317 fix.go:138] unexpected machine state, will restart: <nil>
I1028 17:51:57.537650 221317 out.go:177] * Restarting existing docker container for "no-preload-671620" ...
I1028 17:51:56.371855 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:58.372925 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:51:57.539622 221317 cli_runner.go:164] Run: docker start no-preload-671620
I1028 17:51:57.856034 221317 cli_runner.go:164] Run: docker container inspect no-preload-671620 --format={{.State.Status}}
I1028 17:51:57.889475 221317 kic.go:430] container "no-preload-671620" state is running.
I1028 17:51:57.889895 221317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671620
I1028 17:51:57.916635 221317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/no-preload-671620/config.json ...
I1028 17:51:57.916865 221317 machine.go:93] provisionDockerMachine start ...
I1028 17:51:57.916940 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:51:57.947536 221317 main.go:141] libmachine: Using SSH client type: native
I1028 17:51:57.947813 221317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 33071 <nil> <nil>}
I1028 17:51:57.947835 221317 main.go:141] libmachine: About to run SSH command:
hostname
I1028 17:51:57.948601 221317 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1028 17:52:01.095920 221317 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-671620
I1028 17:52:01.095946 221317 ubuntu.go:169] provisioning hostname "no-preload-671620"
I1028 17:52:01.096010 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:01.113921 221317 main.go:141] libmachine: Using SSH client type: native
I1028 17:52:01.114182 221317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 33071 <nil> <nil>}
I1028 17:52:01.114201 221317 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-671620 && echo "no-preload-671620" | sudo tee /etc/hostname
I1028 17:52:01.279696 221317 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-671620
I1028 17:52:01.279848 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:01.299229 221317 main.go:141] libmachine: Using SSH client type: native
I1028 17:52:01.299502 221317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 33071 <nil> <nil>}
I1028 17:52:01.299525 221317 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-671620' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-671620/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-671620' | sudo tee -a /etc/hosts;
fi
fi
I1028 17:52:01.453030 221317 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1028 17:52:01.453061 221317 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19872-2421/.minikube CaCertPath:/home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19872-2421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19872-2421/.minikube}
I1028 17:52:01.453093 221317 ubuntu.go:177] setting up certificates
I1028 17:52:01.453109 221317 provision.go:84] configureAuth start
I1028 17:52:01.453171 221317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671620
I1028 17:52:01.470990 221317 provision.go:143] copyHostCerts
I1028 17:52:01.471066 221317 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-2421/.minikube/ca.pem, removing ...
I1028 17:52:01.471076 221317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-2421/.minikube/ca.pem
I1028 17:52:01.471155 221317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19872-2421/.minikube/ca.pem (1082 bytes)
I1028 17:52:01.471245 221317 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-2421/.minikube/cert.pem, removing ...
I1028 17:52:01.471254 221317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-2421/.minikube/cert.pem
I1028 17:52:01.471279 221317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19872-2421/.minikube/cert.pem (1123 bytes)
I1028 17:52:01.471327 221317 exec_runner.go:144] found /home/jenkins/minikube-integration/19872-2421/.minikube/key.pem, removing ...
I1028 17:52:01.471339 221317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19872-2421/.minikube/key.pem
I1028 17:52:01.471363 221317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19872-2421/.minikube/key.pem (1679 bytes)
I1028 17:52:01.471415 221317 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19872-2421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca-key.pem org=jenkins.no-preload-671620 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-671620]
I1028 17:52:01.677376 221317 provision.go:177] copyRemoteCerts
I1028 17:52:01.677454 221317 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1028 17:52:01.677500 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:01.696059 221317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/no-preload-671620/id_rsa Username:docker}
I1028 17:52:01.797573 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1028 17:52:01.824699 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1028 17:52:01.851432 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1028 17:52:01.879454 221317 provision.go:87] duration metric: took 426.329918ms to configureAuth
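configureAuth refreshed the host-side CA material and pushed ca.pem, server.pem, and server-key.pem into /etc/docker on the machine, with the server cert signed for the SANs listed above. A hedged spot-check, reusing the same -checkend test this log applies to other certs further down:
# inside the node: certs were copied and the server cert is valid for >24h
docker exec no-preload-671620 sh -c '
  ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
  openssl x509 -noout -in /etc/docker/server.pem -checkend 86400 && echo "server.pem valid >24h"'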
I1028 17:52:01.879490 221317 ubuntu.go:193] setting minikube options for container-runtime
I1028 17:52:01.879692 221317 config.go:182] Loaded profile config "no-preload-671620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 17:52:01.879703 221317 machine.go:96] duration metric: took 3.962830587s to provisionDockerMachine
I1028 17:52:01.879711 221317 start.go:293] postStartSetup for "no-preload-671620" (driver="docker")
I1028 17:52:01.879733 221317 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1028 17:52:01.879802 221317 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1028 17:52:01.879845 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:01.898694 221317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/no-preload-671620/id_rsa Username:docker}
I1028 17:52:02.005956 221317 ssh_runner.go:195] Run: cat /etc/os-release
I1028 17:52:02.010611 221317 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1028 17:52:02.010701 221317 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1028 17:52:02.010728 221317 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1028 17:52:02.010751 221317 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1028 17:52:02.010778 221317 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-2421/.minikube/addons for local assets ...
I1028 17:52:02.010873 221317 filesync.go:126] Scanning /home/jenkins/minikube-integration/19872-2421/.minikube/files for local assets ...
I1028 17:52:02.010999 221317 filesync.go:149] local asset: /home/jenkins/minikube-integration/19872-2421/.minikube/files/etc/ssl/certs/78662.pem -> 78662.pem in /etc/ssl/certs
I1028 17:52:02.011151 221317 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1028 17:52:02.022409 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/files/etc/ssl/certs/78662.pem --> /etc/ssl/certs/78662.pem (1708 bytes)
I1028 17:52:02.047969 221317 start.go:296] duration metric: took 168.231295ms for postStartSetup
I1028 17:52:02.048118 221317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1028 17:52:02.048188 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:02.065317 221317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/no-preload-671620/id_rsa Username:docker}
I1028 17:52:02.165877 221317 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1028 17:52:02.170608 221317 fix.go:56] duration metric: took 4.651771921s for fixHost
I1028 17:52:02.170635 221317 start.go:83] releasing machines lock for "no-preload-671620", held for 4.651826952s
I1028 17:52:02.170714 221317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671620
I1028 17:52:02.187524 221317 ssh_runner.go:195] Run: cat /version.json
I1028 17:52:02.187583 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:02.187640 221317 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1028 17:52:02.187701 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:02.205313 221317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/no-preload-671620/id_rsa Username:docker}
I1028 17:52:02.213830 221317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/no-preload-671620/id_rsa Username:docker}
I1028 17:52:02.457328 221317 ssh_runner.go:195] Run: systemctl --version
I1028 17:52:02.461774 221317 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1028 17:52:02.466406 221317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1028 17:52:02.484656 221317 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1028 17:52:02.484737 221317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1028 17:52:02.493611 221317 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1028 17:52:02.493633 221317 start.go:495] detecting cgroup driver to use...
I1028 17:52:02.493688 221317 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1028 17:52:02.493736 221317 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1028 17:52:02.507731 221317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1028 17:52:02.521558 221317 docker.go:217] disabling cri-docker service (if available) ...
I1028 17:52:02.521661 221317 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1028 17:52:02.535426 221317 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1028 17:52:02.547195 221317 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1028 17:52:02.627549 221317 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1028 17:52:02.713801 221317 docker.go:233] disabling docker service ...
I1028 17:52:02.713903 221317 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1028 17:52:02.726926 221317 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1028 17:52:02.739468 221317 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1028 17:52:02.833179 221317 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1028 17:52:02.924238 221317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1028 17:52:02.936096 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1028 17:52:02.952329 221317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1028 17:52:02.962421 221317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1028 17:52:02.972677 221317 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1028 17:52:02.972797 221317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1028 17:52:02.983207 221317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1028 17:52:02.992928 221317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1028 17:52:03.002587 221317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1028 17:52:03.015288 221317 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1028 17:52:03.027515 221317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1028 17:52:03.039017 221317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1028 17:52:03.049915 221317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1028 17:52:03.060743 221317 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1028 17:52:03.070733 221317 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1028 17:52:03.079681 221317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 17:52:03.169183 221317 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1028 17:52:03.339197 221317 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1028 17:52:03.339270 221317 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1028 17:52:03.343799 221317 start.go:563] Will wait 60s for crictl version
I1028 17:52:03.343869 221317 ssh_runner.go:195] Run: which crictl
I1028 17:52:03.347381 221317 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1028 17:52:03.390115 221317 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1028 17:52:03.390186 221317 ssh_runner.go:195] Run: containerd --version
I1028 17:52:03.415580 221317 ssh_runner.go:195] Run: containerd --version
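The sequence above is the containerd handover: a series of in-place sed edits to /etc/containerd/config.toml (sandbox/pause image, SystemdCgroup = false to match the detected cgroupfs driver, runc v2 runtime, CNI conf_dir, unprivileged ports), a daemon-reload plus restart, then a wait on the CRI socket before confirming versions. A condensed sketch of the wait-and-verify step, using this run's 60s budget:
# wait up to 60s for the containerd CRI socket, then confirm the runtime answers
timeout 60 sh -c 'until stat /run/containerd/containerd.sock >/dev/null 2>&1; do sleep 1; done'
sudo crictl version      # expect RuntimeName: containerd, RuntimeApiVersion: v1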
I1028 17:52:03.445210 221317 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
I1028 17:52:03.447658 221317 cli_runner.go:164] Run: docker network inspect no-preload-671620 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1028 17:52:03.463754 221317 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1028 17:52:03.467765 221317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1028 17:52:03.479190 221317 kubeadm.go:883] updating cluster {Name:no-preload-671620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-671620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1028 17:52:03.479319 221317 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1028 17:52:03.479368 221317 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 17:52:03.518115 221317 containerd.go:627] all images are preloaded for containerd runtime.
I1028 17:52:03.518141 221317 cache_images.go:84] Images are preloaded, skipping loading
I1028 17:52:03.518149 221317 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.2 containerd true true} ...
I1028 17:52:03.518267 221317 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-671620 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.31.2 ClusterName:no-preload-671620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1028 17:52:03.518333 221317 ssh_runner.go:195] Run: sudo crictl info
I1028 17:52:03.562207 221317 cni.go:84] Creating CNI manager for ""
I1028 17:52:03.562228 221317 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1028 17:52:03.562239 221317 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1028 17:52:03.562264 221317 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671620 NodeName:no-preload-671620 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1028 17:52:03.562381 221317 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "no-preload-671620"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.85.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.31.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
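This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new (the 2307-byte scp just below) and later diffed against the existing kubeadm.yaml; an empty diff is what lets the restart path conclude "The running cluster does not require reconfiguration" further down. A sketch of that decision, assuming the same paths:
# if the freshly rendered config matches the previous one, skip kubeadm re-init
if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
  echo "cluster config unchanged; restart without reconfiguration"
fi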
I1028 17:52:03.562449 221317 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
I1028 17:52:03.572641 221317 binaries.go:44] Found k8s binaries, skipping transfer
I1028 17:52:03.572713 221317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1028 17:52:03.581989 221317 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
I1028 17:52:03.599996 221317 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1028 17:52:03.619546 221317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2307 bytes)
I1028 17:52:03.638148 221317 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1028 17:52:03.641781 221317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1028 17:52:03.652892 221317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 17:52:03.734321 221317 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1028 17:52:03.750334 221317 certs.go:68] Setting up /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/no-preload-671620 for IP: 192.168.85.2
I1028 17:52:03.750358 221317 certs.go:194] generating shared ca certs ...
I1028 17:52:03.750375 221317 certs.go:226] acquiring lock for ca certs: {Name:mk3457f8c75e004d4fb7865e732d1b8d6b3cdec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 17:52:03.750587 221317 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19872-2421/.minikube/ca.key
I1028 17:52:03.750653 221317 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19872-2421/.minikube/proxy-client-ca.key
I1028 17:52:03.750666 221317 certs.go:256] generating profile certs ...
I1028 17:52:03.750781 221317 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/no-preload-671620/client.key
I1028 17:52:03.750878 221317 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/no-preload-671620/apiserver.key.6e1bbb6c
I1028 17:52:03.750957 221317 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/no-preload-671620/proxy-client.key
I1028 17:52:03.751116 221317 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/7866.pem (1338 bytes)
W1028 17:52:03.751167 221317 certs.go:480] ignoring /home/jenkins/minikube-integration/19872-2421/.minikube/certs/7866_empty.pem, impossibly tiny 0 bytes
I1028 17:52:03.751187 221317 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca-key.pem (1679 bytes)
I1028 17:52:03.751216 221317 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/ca.pem (1082 bytes)
I1028 17:52:03.751281 221317 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/cert.pem (1123 bytes)
I1028 17:52:03.751312 221317 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/certs/key.pem (1679 bytes)
I1028 17:52:03.751391 221317 certs.go:484] found cert: /home/jenkins/minikube-integration/19872-2421/.minikube/files/etc/ssl/certs/78662.pem (1708 bytes)
I1028 17:52:03.752080 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1028 17:52:03.783982 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1028 17:52:03.811204 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1028 17:52:03.845543 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1028 17:52:03.882710 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/no-preload-671620/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1028 17:52:03.916911 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/no-preload-671620/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1028 17:52:03.945122 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/no-preload-671620/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1028 17:52:03.977684 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/profiles/no-preload-671620/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1028 17:52:04.021682 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/files/etc/ssl/certs/78662.pem --> /usr/share/ca-certificates/78662.pem (1708 bytes)
I1028 17:52:04.055440 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1028 17:52:04.084011 221317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19872-2421/.minikube/certs/7866.pem --> /usr/share/ca-certificates/7866.pem (1338 bytes)
I1028 17:52:04.110864 221317 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1028 17:52:04.133704 221317 ssh_runner.go:195] Run: openssl version
I1028 17:52:04.146029 221317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78662.pem && ln -fs /usr/share/ca-certificates/78662.pem /etc/ssl/certs/78662.pem"
I1028 17:52:04.156755 221317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78662.pem
I1028 17:52:04.160339 221317 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 17:09 /usr/share/ca-certificates/78662.pem
I1028 17:52:04.160418 221317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78662.pem
I1028 17:52:04.167325 221317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78662.pem /etc/ssl/certs/3ec20f2e.0"
I1028 17:52:04.176801 221317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1028 17:52:04.186656 221317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1028 17:52:04.190407 221317 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 17:01 /usr/share/ca-certificates/minikubeCA.pem
I1028 17:52:04.190474 221317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1028 17:52:04.197775 221317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1028 17:52:04.208149 221317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7866.pem && ln -fs /usr/share/ca-certificates/7866.pem /etc/ssl/certs/7866.pem"
I1028 17:52:04.217986 221317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7866.pem
I1028 17:52:04.221895 221317 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 17:09 /usr/share/ca-certificates/7866.pem
I1028 17:52:04.221958 221317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7866.pem
I1028 17:52:04.229366 221317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7866.pem /etc/ssl/certs/51391683.0"
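The loop above installs each CA into /usr/share/ca-certificates and links it twice: once by name under /etc/ssl/certs, and once by its openssl subject hash (b5213941.0 for minikubeCA here), which is how OpenSSL actually looks certificates up. A hedged way to re-derive the hash and confirm the link:
# recompute the subject hash and check it points back at the named cert
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
readlink "/etc/ssl/certs/$h.0"    # expect: /etc/ssl/certs/minikubeCA.pem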
I1028 17:52:04.239107 221317 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1028 17:52:04.242836 221317 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1028 17:52:04.250019 221317 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1028 17:52:04.257161 221317 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1028 17:52:04.264170 221317 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1028 17:52:04.271995 221317 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1028 17:52:04.279263 221317 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1028 17:52:04.286507 221317 kubeadm.go:392] StartCluster: {Name:no-preload-671620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730110049-19872@sha256:ead1232eaf026cb87df5e47192f652e5fef1a28f5a05a9ac9cb1a241cca351e9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-671620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 17:52:04.286600 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1028 17:52:04.286658 221317 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1028 17:52:04.333628 221317 cri.go:89] found id: "d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4"
I1028 17:52:04.333653 221317 cri.go:89] found id: "efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f"
I1028 17:52:04.333658 221317 cri.go:89] found id: "a4d195b29434f15222415b7d38550a89522c428c0a9774aacc5c9c02b18b207c"
I1028 17:52:04.333680 221317 cri.go:89] found id: "9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990"
I1028 17:52:04.333685 221317 cri.go:89] found id: "85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae"
I1028 17:52:04.333689 221317 cri.go:89] found id: "33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d"
I1028 17:52:04.333692 221317 cri.go:89] found id: "a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22"
I1028 17:52:04.333695 221317 cri.go:89] found id: "eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634"
I1028 17:52:04.333698 221317 cri.go:89] found id: ""
I1028 17:52:04.333773 221317 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1028 17:52:04.349236 221317 cri.go:116] JSON = null
W1028 17:52:04.349303 221317 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
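The warning is benign on this path: the unpause pass asks crictl for kube-system containers (8 found) and runc for paused ones (null JSON), so there is simply nothing to unpause before the restart. The two views it compares can be reproduced directly:
# containers the CRI layer reports for kube-system ...
sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# ... versus the state runc tracks under containerd's k8s.io namespace
sudo runc --root /run/containerd/runc/k8s.io list -f json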
I1028 17:52:04.349369 221317 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1028 17:52:04.361022 221317 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1028 17:52:04.361090 221317 kubeadm.go:593] restartPrimaryControlPlane start ...
I1028 17:52:04.361172 221317 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1028 17:52:04.376190 221317 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1028 17:52:04.376895 221317 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671620" does not appear in /home/jenkins/minikube-integration/19872-2421/kubeconfig
I1028 17:52:04.377225 221317 kubeconfig.go:62] /home/jenkins/minikube-integration/19872-2421/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671620" cluster setting kubeconfig missing "no-preload-671620" context setting]
I1028 17:52:04.377792 221317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-2421/kubeconfig: {Name:mk5d0caa294b9d2ca80eaff01c0ffe9a532db746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 17:52:04.379987 221317 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1028 17:52:04.400583 221317 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I1028 17:52:04.400658 221317 kubeadm.go:597] duration metric: took 39.549104ms to restartPrimaryControlPlane
I1028 17:52:04.400683 221317 kubeadm.go:394] duration metric: took 114.18446ms to StartCluster
I1028 17:52:04.400723 221317 settings.go:142] acquiring lock: {Name:mk543e20b5eb6c4236b6d62e05ff811d7fc9498d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 17:52:04.400801 221317 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19872-2421/kubeconfig
I1028 17:52:04.401767 221317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19872-2421/kubeconfig: {Name:mk5d0caa294b9d2ca80eaff01c0ffe9a532db746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 17:52:04.402050 221317 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1028 17:52:04.402472 221317 config.go:182] Loaded profile config "no-preload-671620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 17:52:04.402493 221317 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1028 17:52:04.402811 221317 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671620"
I1028 17:52:04.402828 221317 addons.go:234] Setting addon storage-provisioner=true in "no-preload-671620"
W1028 17:52:04.402834 221317 addons.go:243] addon storage-provisioner should already be in state true
I1028 17:52:04.402870 221317 host.go:66] Checking if "no-preload-671620" exists ...
I1028 17:52:04.403357 221317 cli_runner.go:164] Run: docker container inspect no-preload-671620 --format={{.State.Status}}
I1028 17:52:04.403584 221317 addons.go:69] Setting default-storageclass=true in profile "no-preload-671620"
I1028 17:52:04.403626 221317 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671620"
I1028 17:52:04.403839 221317 addons.go:69] Setting metrics-server=true in profile "no-preload-671620"
I1028 17:52:04.403870 221317 addons.go:234] Setting addon metrics-server=true in "no-preload-671620"
W1028 17:52:04.403890 221317 addons.go:243] addon metrics-server should already be in state true
I1028 17:52:04.403925 221317 host.go:66] Checking if "no-preload-671620" exists ...
I1028 17:52:04.404343 221317 cli_runner.go:164] Run: docker container inspect no-preload-671620 --format={{.State.Status}}
I1028 17:52:04.404580 221317 cli_runner.go:164] Run: docker container inspect no-preload-671620 --format={{.State.Status}}
I1028 17:52:04.405032 221317 addons.go:69] Setting dashboard=true in profile "no-preload-671620"
I1028 17:52:04.405055 221317 addons.go:234] Setting addon dashboard=true in "no-preload-671620"
W1028 17:52:04.405062 221317 addons.go:243] addon dashboard should already be in state true
I1028 17:52:04.405102 221317 host.go:66] Checking if "no-preload-671620" exists ...
I1028 17:52:04.405586 221317 cli_runner.go:164] Run: docker container inspect no-preload-671620 --format={{.State.Status}}
I1028 17:52:04.413176 221317 out.go:177] * Verifying Kubernetes components...
I1028 17:52:04.417909 221317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 17:52:04.476222 221317 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1028 17:52:04.478633 221317 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1028 17:52:04.478655 221317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1028 17:52:04.478750 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:04.483740 221317 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1028 17:52:04.484238 221317 addons.go:234] Setting addon default-storageclass=true in "no-preload-671620"
W1028 17:52:04.484252 221317 addons.go:243] addon default-storageclass should already be in state true
I1028 17:52:04.484288 221317 host.go:66] Checking if "no-preload-671620" exists ...
I1028 17:52:04.484872 221317 cli_runner.go:164] Run: docker container inspect no-preload-671620 --format={{.State.Status}}
I1028 17:52:04.488505 221317 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1028 17:52:04.488524 221317 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1028 17:52:04.488729 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:04.490838 221317 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1028 17:52:04.495286 221317 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1028 17:52:00.382717 215146 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:01.374686 215146 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:01.374713 215146 pod_ready.go:82] duration metric: took 1m1.009759232s for pod "kube-controller-manager-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
I1028 17:52:01.374725 215146 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zzhjw" in "kube-system" namespace to be "Ready" ...
I1028 17:52:01.380398 215146 pod_ready.go:93] pod "kube-proxy-zzhjw" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:01.380426 215146 pod_ready.go:82] duration metric: took 5.693725ms for pod "kube-proxy-zzhjw" in "kube-system" namespace to be "Ready" ...
I1028 17:52:01.380438 215146 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
I1028 17:52:03.386921 215146 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:04.501657 221317 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1028 17:52:04.501687 221317 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1028 17:52:04.501768 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:04.546839 221317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/no-preload-671620/id_rsa Username:docker}
I1028 17:52:04.571009 221317 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1028 17:52:04.571029 221317 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1028 17:52:04.571094 221317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671620
I1028 17:52:04.589512 221317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/no-preload-671620/id_rsa Username:docker}
I1028 17:52:04.591368 221317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/no-preload-671620/id_rsa Username:docker}
I1028 17:52:04.601102 221317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/19872-2421/.minikube/machines/no-preload-671620/id_rsa Username:docker}
I1028 17:52:04.657471 221317 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1028 17:52:04.740706 221317 node_ready.go:35] waiting up to 6m0s for node "no-preload-671620" to be "Ready" ...
I1028 17:52:04.812037 221317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1028 17:52:04.812107 221317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1028 17:52:04.843353 221317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1028 17:52:04.846378 221317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1028 17:52:04.846450 221317 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1028 17:52:04.870370 221317 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1028 17:52:04.870454 221317 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1028 17:52:04.956513 221317 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1028 17:52:04.956652 221317 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1028 17:52:04.969335 221317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 17:52:05.020981 221317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1028 17:52:05.021065 221317 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1028 17:52:05.103292 221317 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1028 17:52:05.103376 221317 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1028 17:52:05.205366 221317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1028 17:52:05.349719 221317 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1028 17:52:05.349801 221317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
W1028 17:52:05.402655 221317 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1028 17:52:05.402700 221317 retry.go:31] will retry after 245.186028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
W1028 17:52:05.402767 221317 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1028 17:52:05.402774 221317 retry.go:31] will retry after 185.88728ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
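Both failures are the same race: the restarted apiserver is not yet listening on localhost:8443, so kubectl cannot fetch the OpenAPI schema for validation, and the retry loop backs off before falling back to apply --force below. A minimal readiness gate one could put in front of such applies (endpoint and kubeconfig as in this run; /readyz is anonymously readable by default):
# block until the local apiserver answers /readyz, then apply normally
until curl -ksf https://localhost:8443/readyz >/dev/null; do sleep 1; done
sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f /etc/kubernetes/addons/storageclass.yaml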
I1028 17:52:05.446938 221317 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1028 17:52:05.446961 221317 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1028 17:52:05.502637 221317 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1028 17:52:05.502676 221317 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1028 17:52:05.584338 221317 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1028 17:52:05.584359 221317 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1028 17:52:05.589608 221317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 17:52:05.648983 221317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1028 17:52:05.665554 221317 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1028 17:52:05.665627 221317 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1028 17:52:05.821258 221317 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1028 17:52:05.821333 221317 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1028 17:52:05.881708 221317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1028 17:52:05.387165 215146 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:07.887839 215146 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:09.396099 215146 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:09.396169 215146 pod_ready.go:82] duration metric: took 8.015721675s for pod "kube-scheduler-old-k8s-version-743648" in "kube-system" namespace to be "Ready" ...
I1028 17:52:09.396196 215146 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace to be "Ready" ...
I1028 17:52:09.994045 221317 node_ready.go:49] node "no-preload-671620" has status "Ready":"True"
I1028 17:52:09.994078 221317 node_ready.go:38] duration metric: took 5.25333272s for node "no-preload-671620" to be "Ready" ...
I1028 17:52:09.994089 221317 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1028 17:52:10.018735 221317 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-62xsk" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.032752 221317 pod_ready.go:93] pod "coredns-7c65d6cfc9-62xsk" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:10.032792 221317 pod_ready.go:82] duration metric: took 14.00996ms for pod "coredns-7c65d6cfc9-62xsk" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.032806 221317 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-671620" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.055733 221317 pod_ready.go:93] pod "etcd-no-preload-671620" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:10.055760 221317 pod_ready.go:82] duration metric: took 22.947806ms for pod "etcd-no-preload-671620" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.055778 221317 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-671620" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.073877 221317 pod_ready.go:93] pod "kube-apiserver-no-preload-671620" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:10.073905 221317 pod_ready.go:82] duration metric: took 18.119134ms for pod "kube-apiserver-no-preload-671620" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.073918 221317 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-671620" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.087467 221317 pod_ready.go:93] pod "kube-controller-manager-no-preload-671620" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:10.087495 221317 pod_ready.go:82] duration metric: took 13.569113ms for pod "kube-controller-manager-no-preload-671620" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.087508 221317 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2nnd8" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.214730 221317 pod_ready.go:93] pod "kube-proxy-2nnd8" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:10.214765 221317 pod_ready.go:82] duration metric: took 127.24982ms for pod "kube-proxy-2nnd8" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.214778 221317 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-671620" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.598275 221317 pod_ready.go:93] pod "kube-scheduler-no-preload-671620" in "kube-system" namespace has status "Ready":"True"
I1028 17:52:10.598307 221317 pod_ready.go:82] duration metric: took 383.521759ms for pod "kube-scheduler-no-preload-671620" in "kube-system" namespace to be "Ready" ...
I1028 17:52:10.598320 221317 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace to be "Ready" ...
I1028 17:52:12.606587 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:12.763641 221317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.558189123s)
I1028 17:52:12.763767 221317 addons.go:475] Verifying addon metrics-server=true in "no-preload-671620"
I1028 17:52:12.828187 221317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.238499503s)
I1028 17:52:12.828238 221317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.179185841s)
I1028 17:52:12.828514 221317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.946729135s)
I1028 17:52:12.831542 221317 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-671620 addons enable metrics-server
I1028 17:52:12.835449 221317 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
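The Completed lines above show how each addon lands on the cluster: minikube runs the node's pinned kubectl binary against the in-VM kubeconfig and applies the addon's manifests in a single invocation. The same call for the metrics-server addon, reformatted as a sketch (run in a shell on the node, e.g. via minikube ssh, which is an assumption here; all paths and file names are taken from the log):

    # Apply the metrics-server addon manifests exactly as ssh_runner ran them;
    # the pinned kubectl under /var/lib/minikube/binaries matches the cluster version.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.2/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml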
I1028 17:52:11.403439 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:13.903256 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:12.837931 221317 addons.go:510] duration metric: took 8.435433917s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
I1028 17:52:15.105930 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:15.905755 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:18.402603 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:17.606087 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:19.611821 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:22.105083 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:20.409296 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:22.903578 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:24.105507 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:26.604921 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:25.403212 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:27.903456 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:29.103863 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:31.105700 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:30.402618 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:32.903183 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:33.604390 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:35.605270 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:35.402894 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:37.402921 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:39.407728 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:38.105470 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:40.105739 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:42.110389 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:41.902217 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:43.902841 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:44.603774 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:46.604782 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:46.402949 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:48.403172 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:48.605501 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:50.606074 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:50.902528 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:52.904605 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:53.105730 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:55.105836 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:57.106182 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:55.401888 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:57.404216 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:59.901848 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:52:59.604831 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:01.605205 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:01.902156 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:03.903376 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:03.605451 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:06.105655 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:06.404303 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:08.904591 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:08.106357 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:10.604089 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:11.402296 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:13.903797 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:12.604820 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:14.605114 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:16.606189 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:16.402739 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:18.903188 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:19.104274 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:21.605732 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:21.401986 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:23.402635 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:24.105279 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:26.105338 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:25.902538 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:27.902722 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:29.903115 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:28.105933 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:30.116012 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:31.903725 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:34.403587 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:32.605975 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:35.105500 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:36.403772 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:38.903738 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:37.603727 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:39.604328 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:41.605079 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:41.401810 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:43.404321 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:43.607400 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:46.103761 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:45.404851 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:47.902832 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:49.903288 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:48.105018 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:50.105305 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:52.403120 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:54.902750 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:52.604388 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:54.605083 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:56.624244 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:56.902993 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:59.403935 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:53:59.104739 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:01.105380 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:01.405887 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:03.902918 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:03.604662 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:06.105835 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:06.402740 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:08.901955 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:08.605084 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:10.605668 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:10.903113 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:13.404332 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:13.104991 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:15.105568 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:15.901937 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:17.903008 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:19.903497 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:17.604796 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:19.604845 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:21.605248 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:22.402932 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:24.903029 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:24.105665 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:26.605285 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:27.402751 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:29.408164 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:29.105269 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:31.105655 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:31.902254 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:33.902654 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:33.604818 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:35.605314 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:36.403234 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:38.902989 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:37.606076 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:40.105916 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:42.107486 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:41.403337 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:43.904604 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:44.605529 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:47.104863 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:45.911071 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:48.402130 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:49.105939 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:51.605910 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:50.402970 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:52.904597 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:54.104814 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:56.105477 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:55.402156 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:57.402401 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:59.409121 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:54:58.105576 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:00.155323 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:01.902400 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:03.902455 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:02.605190 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:05.105466 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:05.902611 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:07.903035 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:07.605015 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:10.105600 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:10.403203 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:12.902646 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:14.904525 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:12.603935 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:14.604878 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:17.105017 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:17.403120 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:19.403710 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:19.606854 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:22.104941 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:21.902564 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:24.403100 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:24.106380 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:26.605225 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:26.902735 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:28.902857 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:29.105257 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:31.106142 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:31.402331 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:33.402846 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:33.604390 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:35.604427 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:35.902772 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:37.908704 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:38.105021 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:40.105209 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:42.106886 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:40.402250 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:42.403352 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:44.902191 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:44.605371 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:46.605673 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:46.902393 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:48.902433 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:49.104438 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:51.105755 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:51.402737 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:53.903135 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:53.605525 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:56.104811 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:55.904316 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:58.403625 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:55:58.112377 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:00.179960 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:00.427009 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:02.902449 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:04.903621 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:02.604228 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:04.605206 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:07.104297 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:06.904653 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:09.402306 215146 pod_ready.go:103] pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:09.402337 215146 pod_ready.go:82] duration metric: took 4m0.00611291s for pod "metrics-server-9975d5f86-4qgm8" in "kube-system" namespace to be "Ready" ...
E1028 17:56:09.402348 215146 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1028 17:56:09.402356 215146 pod_ready.go:39] duration metric: took 5m20.571187986s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
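The four-minute wall of pod_ready.go:103 lines above is a simple poll loop: probe the pod's Ready condition every couple of seconds until it flips or the budget runs out. A minimal equivalent of that check, assuming kubectl points at the cluster through the profile-named context (minikube names the kubeconfig context after the profile):

    # Block until the metrics-server pod from the log reports Ready,
    # mirroring the 6m0s per-pod budget used by pod_ready.go (sketch only;
    # in this run it would time out, since the image pull never succeeds).
    kubectl --context old-k8s-version-743648 -n kube-system \
      wait --for=condition=Ready pod/metrics-server-9975d5f86-4qgm8 --timeout=6m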
I1028 17:56:09.402371 215146 api_server.go:52] waiting for apiserver process to appear ...
I1028 17:56:09.402405 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1028 17:56:09.402471 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1028 17:56:09.448538 215146 cri.go:89] found id: "be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6"
I1028 17:56:09.448610 215146 cri.go:89] found id: "c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2"
I1028 17:56:09.448617 215146 cri.go:89] found id: ""
I1028 17:56:09.448624 215146 logs.go:282] 2 containers: [be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6 c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2]
I1028 17:56:09.448679 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.453245 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.456632 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1028 17:56:09.456702 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1028 17:56:09.501220 215146 cri.go:89] found id: "ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80"
I1028 17:56:09.501288 215146 cri.go:89] found id: "48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420"
I1028 17:56:09.501299 215146 cri.go:89] found id: ""
I1028 17:56:09.501307 215146 logs.go:282] 2 containers: [ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80 48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420]
I1028 17:56:09.501362 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.505290 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.508437 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1028 17:56:09.508530 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1028 17:56:09.550712 215146 cri.go:89] found id: "0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522"
I1028 17:56:09.550740 215146 cri.go:89] found id: "697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e"
I1028 17:56:09.550748 215146 cri.go:89] found id: ""
I1028 17:56:09.550756 215146 logs.go:282] 2 containers: [0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522 697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e]
I1028 17:56:09.550814 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.554604 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.558130 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1028 17:56:09.558217 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1028 17:56:09.596423 215146 cri.go:89] found id: "0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca"
I1028 17:56:09.596441 215146 cri.go:89] found id: "e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98"
I1028 17:56:09.596446 215146 cri.go:89] found id: ""
I1028 17:56:09.596453 215146 logs.go:282] 2 containers: [0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98]
I1028 17:56:09.596506 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.603505 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.607560 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1028 17:56:09.607627 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1028 17:56:09.644117 215146 cri.go:89] found id: "251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b"
I1028 17:56:09.644141 215146 cri.go:89] found id: "f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1"
I1028 17:56:09.644146 215146 cri.go:89] found id: ""
I1028 17:56:09.644153 215146 logs.go:282] 2 containers: [251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1]
I1028 17:56:09.644207 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.647803 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.651052 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1028 17:56:09.651119 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1028 17:56:09.691655 215146 cri.go:89] found id: "45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6"
I1028 17:56:09.691683 215146 cri.go:89] found id: "07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45"
I1028 17:56:09.691687 215146 cri.go:89] found id: ""
I1028 17:56:09.691694 215146 logs.go:282] 2 containers: [45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6 07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45]
I1028 17:56:09.691755 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.695508 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.699134 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1028 17:56:09.699244 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1028 17:56:09.738913 215146 cri.go:89] found id: "281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572"
I1028 17:56:09.738939 215146 cri.go:89] found id: "eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d"
I1028 17:56:09.738944 215146 cri.go:89] found id: ""
I1028 17:56:09.738951 215146 logs.go:282] 2 containers: [281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572 eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d]
I1028 17:56:09.739019 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.742717 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.746117 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1028 17:56:09.746199 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1028 17:56:09.784046 215146 cri.go:89] found id: "616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245"
I1028 17:56:09.784070 215146 cri.go:89] found id: ""
I1028 17:56:09.784078 215146 logs.go:282] 1 container: [616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245]
I1028 17:56:09.784132 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.787696 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1028 17:56:09.787759 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1028 17:56:09.835450 215146 cri.go:89] found id: "1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220"
I1028 17:56:09.835472 215146 cri.go:89] found id: "7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78"
I1028 17:56:09.835477 215146 cri.go:89] found id: ""
I1028 17:56:09.835484 215146 logs.go:282] 2 containers: [1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220 7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78]
I1028 17:56:09.835550 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.839920 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:09.844047 215146 logs.go:123] Gathering logs for coredns [0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522] ...
I1028 17:56:09.844069 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522"
I1028 17:56:09.887075 215146 logs.go:123] Gathering logs for kube-proxy [251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b] ...
I1028 17:56:09.887100 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b"
I1028 17:56:09.930757 215146 logs.go:123] Gathering logs for kube-controller-manager [45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6] ...
I1028 17:56:09.930786 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6"
I1028 17:56:09.989516 215146 logs.go:123] Gathering logs for kube-controller-manager [07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45] ...
I1028 17:56:09.989550 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45"
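The listing and gathering sequence above reduces to a fixed recipe on the node: enumerate the container IDs for each component with crictl, then tail each container's log. A compact sketch of that loop, assuming a shell on the node (e.g. via minikube ssh; the component names and the --tail size are the ones in the log):

    # For every control-plane component, list all containers (running or
    # exited) and dump the last 400 log lines of each, as logs.go does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        sudo crictl logs --tail 400 "$id"
      done
    done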
I1028 17:56:09.114497 221317 pod_ready.go:103] pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace has status "Ready":"False"
I1028 17:56:10.606234 221317 pod_ready.go:82] duration metric: took 4m0.007899071s for pod "metrics-server-6867b74b74-5xptn" in "kube-system" namespace to be "Ready" ...
E1028 17:56:10.606260 221317 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1028 17:56:10.606270 221317 pod_ready.go:39] duration metric: took 4m0.612170231s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1028 17:56:10.606286 221317 api_server.go:52] waiting for apiserver process to appear ...
I1028 17:56:10.606330 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1028 17:56:10.606407 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1028 17:56:10.666360 221317 cri.go:89] found id: "1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15"
I1028 17:56:10.666437 221317 cri.go:89] found id: "33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d"
I1028 17:56:10.666470 221317 cri.go:89] found id: ""
I1028 17:56:10.666491 221317 logs.go:282] 2 containers: [1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15 33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d]
I1028 17:56:10.666601 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:10.673256 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:10.678769 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1028 17:56:10.678853 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1028 17:56:10.759553 221317 cri.go:89] found id: "9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a"
I1028 17:56:10.759572 221317 cri.go:89] found id: "eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634"
I1028 17:56:10.759577 221317 cri.go:89] found id: ""
I1028 17:56:10.759584 221317 logs.go:282] 2 containers: [9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634]
I1028 17:56:10.759638 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:10.763829 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:10.767612 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1028 17:56:10.767685 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1028 17:56:10.839977 221317 cri.go:89] found id: "8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e"
I1028 17:56:10.839996 221317 cri.go:89] found id: "d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4"
I1028 17:56:10.840001 221317 cri.go:89] found id: ""
I1028 17:56:10.840008 221317 logs.go:282] 2 containers: [8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4]
I1028 17:56:10.840062 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:10.843760 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:10.847814 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1028 17:56:10.847887 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1028 17:56:10.911634 221317 cri.go:89] found id: "696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7"
I1028 17:56:10.911655 221317 cri.go:89] found id: "85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae"
I1028 17:56:10.911660 221317 cri.go:89] found id: ""
I1028 17:56:10.911668 221317 logs.go:282] 2 containers: [696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7 85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae]
I1028 17:56:10.911726 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:10.916725 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:10.920643 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1028 17:56:10.920712 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1028 17:56:10.978268 221317 cri.go:89] found id: "7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca"
I1028 17:56:10.978286 221317 cri.go:89] found id: "9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990"
I1028 17:56:10.978291 221317 cri.go:89] found id: ""
I1028 17:56:10.978310 221317 logs.go:282] 2 containers: [7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca 9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990]
I1028 17:56:10.978366 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:10.983423 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:10.987335 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1028 17:56:10.987401 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1028 17:56:11.062534 221317 cri.go:89] found id: "db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3"
I1028 17:56:11.062554 221317 cri.go:89] found id: "a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22"
I1028 17:56:11.062559 221317 cri.go:89] found id: ""
I1028 17:56:11.062566 221317 logs.go:282] 2 containers: [db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3 a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22]
I1028 17:56:11.062633 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:11.073005 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:11.079278 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1028 17:56:11.079366 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1028 17:56:11.155252 221317 cri.go:89] found id: "c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080"
I1028 17:56:11.155283 221317 cri.go:89] found id: "efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f"
I1028 17:56:11.155289 221317 cri.go:89] found id: ""
I1028 17:56:11.155297 221317 logs.go:282] 2 containers: [c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080 efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f]
I1028 17:56:11.155371 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:11.165343 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:11.177230 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1028 17:56:11.177356 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1028 17:56:11.250284 221317 cri.go:89] found id: "45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd"
I1028 17:56:11.250303 221317 cri.go:89] found id: ""
I1028 17:56:11.250311 221317 logs.go:282] 1 container: [45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd]
I1028 17:56:11.250376 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:11.254624 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1028 17:56:11.254699 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1028 17:56:11.313196 221317 cri.go:89] found id: "488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8"
I1028 17:56:11.313217 221317 cri.go:89] found id: "f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345"
I1028 17:56:11.313222 221317 cri.go:89] found id: ""
I1028 17:56:11.313229 221317 logs.go:282] 2 containers: [488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8 f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345]
I1028 17:56:11.313284 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:11.317578 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:11.321727 221317 logs.go:123] Gathering logs for kube-apiserver [1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15] ...
I1028 17:56:11.321751 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15"
I1028 17:56:11.413482 221317 logs.go:123] Gathering logs for coredns [8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e] ...
I1028 17:56:11.413516 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e"
I1028 17:56:11.457061 221317 logs.go:123] Gathering logs for coredns [d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4] ...
I1028 17:56:11.457096 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4"
I1028 17:56:11.500824 221317 logs.go:123] Gathering logs for kube-scheduler [696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7] ...
I1028 17:56:11.500850 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7"
I1028 17:56:11.543919 221317 logs.go:123] Gathering logs for kube-scheduler [85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae] ...
I1028 17:56:11.544000 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae"
I1028 17:56:11.594634 221317 logs.go:123] Gathering logs for kube-proxy [9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990] ...
I1028 17:56:11.594668 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990"
I1028 17:56:11.643652 221317 logs.go:123] Gathering logs for storage-provisioner [f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345] ...
I1028 17:56:11.643681 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345"
I1028 17:56:11.698905 221317 logs.go:123] Gathering logs for kube-apiserver [33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d] ...
I1028 17:56:11.698974 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d"
I1028 17:56:11.768759 221317 logs.go:123] Gathering logs for kube-proxy [7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca] ...
I1028 17:56:11.768797 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca"
I1028 17:56:11.810536 221317 logs.go:123] Gathering logs for kindnet [c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080] ...
I1028 17:56:11.810563 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080"
I1028 17:56:11.863285 221317 logs.go:123] Gathering logs for etcd [9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a] ...
I1028 17:56:11.863315 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a"
I1028 17:56:11.919903 221317 logs.go:123] Gathering logs for etcd [eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634] ...
I1028 17:56:11.919991 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634"
I1028 17:56:11.966957 221317 logs.go:123] Gathering logs for kube-controller-manager [a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22] ...
I1028 17:56:11.966988 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22"
I1028 17:56:12.045742 221317 logs.go:123] Gathering logs for kubernetes-dashboard [45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd] ...
I1028 17:56:12.045784 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd"
I1028 17:56:12.091510 221317 logs.go:123] Gathering logs for containerd ...
I1028 17:56:12.091537 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1028 17:56:12.154140 221317 logs.go:123] Gathering logs for container status ...
I1028 17:56:12.154176 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1028 17:56:12.214893 221317 logs.go:123] Gathering logs for kubelet ...
I1028 17:56:12.214923 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1028 17:56:10.075876 215146 logs.go:123] Gathering logs for storage-provisioner [1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220] ...
I1028 17:56:10.075913 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220"
I1028 17:56:10.131752 215146 logs.go:123] Gathering logs for container status ...
I1028 17:56:10.131782 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1028 17:56:10.189774 215146 logs.go:123] Gathering logs for kubelet ...
I1028 17:56:10.189811 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1028 17:56:10.260140 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.823989 666 reflector.go:138] object-"kube-system"/"metrics-server-token-s8546": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-s8546" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.260375 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.825360 666 reflector.go:138] object-"kube-system"/"kindnet-token-rzkm7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rzkm7" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.260691 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.826686 666 reflector.go:138] object-"kube-system"/"coredns-token-bhrx5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-bhrx5" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.260895 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.827119 666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.261159 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.827457 666 reflector.go:138] object-"kube-system"/"kube-proxy-token-j55lg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-j55lg" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.263578 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862535 666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-42rw4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-42rw4" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.263789 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862690 666 reflector.go:138] object-"default"/"default-token-8g42r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8g42r" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:10.265241 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862027 666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
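The secret and configmap watch failures above come from the node authorizer: immediately after the restart its graph has no relationship yet between node old-k8s-version-743648 and those objects, so the kubelet's list requests are refused; such errors are typically transient while the graph is rebuilt. A later spot check of the same permission, assuming an admin context allowed to impersonate users (the answer is evaluated by the full authorizer chain at call time, so it should flip to yes once the relationship exists):

    # Ask the API server whether the node identity may list kube-system
    # secrets, i.e. the kind of request the kubelet was refused above.
    kubectl --context old-k8s-version-743648 auth can-i list secrets \
      -n kube-system --as=system:node:old-k8s-version-743648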
W1028 17:56:10.273845 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:52 old-k8s-version-743648 kubelet[666]: E1028 17:50:52.034873 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:10.274042 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:52 old-k8s-version-743648 kubelet[666]: E1028 17:50:52.426336 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.276900 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:04 old-k8s-version-743648 kubelet[666]: E1028 17:51:04.181734 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:10.278592 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:15 old-k8s-version-743648 kubelet[666]: E1028 17:51:15.145992 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.279523 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:20 old-k8s-version-743648 kubelet[666]: E1028 17:51:20.589441 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.279852 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:21 old-k8s-version-743648 kubelet[666]: E1028 17:51:21.593639 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.280289 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:23 old-k8s-version-743648 kubelet[666]: E1028 17:51:23.609906 666 pod_workers.go:191] Error syncing pod 8ffc3abd-c784-474f-80a5-f6a8b25abc51 ("storage-provisioner_kube-system(8ffc3abd-c784-474f-80a5-f6a8b25abc51)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8ffc3abd-c784-474f-80a5-f6a8b25abc51)"
W1028 17:56:10.280628 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:25 old-k8s-version-743648 kubelet[666]: E1028 17:51:25.775266 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.283444 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:29 old-k8s-version-743648 kubelet[666]: E1028 17:51:29.147486 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:10.283892 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:41 old-k8s-version-743648 kubelet[666]: E1028 17:51:41.138852 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.284352 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:41 old-k8s-version-743648 kubelet[666]: E1028 17:51:41.660884 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.284688 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:45 old-k8s-version-743648 kubelet[666]: E1028 17:51:45.774632 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.284874 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:52 old-k8s-version-743648 kubelet[666]: E1028 17:51:52.138242 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.285204 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:59 old-k8s-version-743648 kubelet[666]: E1028 17:51:59.137591 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.285395 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:05 old-k8s-version-743648 kubelet[666]: E1028 17:52:05.138208 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.285982 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:14 old-k8s-version-743648 kubelet[666]: E1028 17:52:14.756224 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.286310 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:15 old-k8s-version-743648 kubelet[666]: E1028 17:52:15.774720 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.288793 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:17 old-k8s-version-743648 kubelet[666]: E1028 17:52:17.148693 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:10.289126 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:28 old-k8s-version-743648 kubelet[666]: E1028 17:52:28.141800 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.289309 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:32 old-k8s-version-743648 kubelet[666]: E1028 17:52:32.138343 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.289637 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:40 old-k8s-version-743648 kubelet[666]: E1028 17:52:40.138205 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.289820 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:44 old-k8s-version-743648 kubelet[666]: E1028 17:52:44.140775 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.290404 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:55 old-k8s-version-743648 kubelet[666]: E1028 17:52:55.867579 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.290588 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:58 old-k8s-version-743648 kubelet[666]: E1028 17:52:58.138213 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.290916 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:05 old-k8s-version-743648 kubelet[666]: E1028 17:53:05.775406 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.291101 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:13 old-k8s-version-743648 kubelet[666]: E1028 17:53:13.138098 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.291427 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:17 old-k8s-version-743648 kubelet[666]: E1028 17:53:17.137660 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.291608 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:25 old-k8s-version-743648 kubelet[666]: E1028 17:53:25.138080 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.291950 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:28 old-k8s-version-743648 kubelet[666]: E1028 17:53:28.141572 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.294378 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:39 old-k8s-version-743648 kubelet[666]: E1028 17:53:39.147007 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:10.294718 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:43 old-k8s-version-743648 kubelet[666]: E1028 17:53:43.137608 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.294904 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:52 old-k8s-version-743648 kubelet[666]: E1028 17:53:52.138963 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.295229 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:57 old-k8s-version-743648 kubelet[666]: E1028 17:53:57.137666 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.295414 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:03 old-k8s-version-743648 kubelet[666]: E1028 17:54:03.139810 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.295738 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:08 old-k8s-version-743648 kubelet[666]: E1028 17:54:08.137748 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.295926 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:15 old-k8s-version-743648 kubelet[666]: E1028 17:54:15.138210 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.296514 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:22 old-k8s-version-743648 kubelet[666]: E1028 17:54:22.110858 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.296852 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:25 old-k8s-version-743648 kubelet[666]: E1028 17:54:25.774510 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.297038 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:26 old-k8s-version-743648 kubelet[666]: E1028 17:54:26.142477 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.297362 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:36 old-k8s-version-743648 kubelet[666]: E1028 17:54:36.140263 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.297545 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:40 old-k8s-version-743648 kubelet[666]: E1028 17:54:40.138696 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.297872 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:51 old-k8s-version-743648 kubelet[666]: E1028 17:54:51.137643 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.298055 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:54 old-k8s-version-743648 kubelet[666]: E1028 17:54:54.140745 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.298386 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:05 old-k8s-version-743648 kubelet[666]: E1028 17:55:05.137675 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.298570 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:06 old-k8s-version-743648 kubelet[666]: E1028 17:55:06.141555 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.298895 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: E1028 17:55:18.142304 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.299079 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: E1028 17:55:18.144881 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.299262 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.145457 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.299586 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.148252 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.299910 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.143236 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.300093 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.149385 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:10.300421 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:56 old-k8s-version-743648 kubelet[666]: E1028 17:55:56.138548 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:10.300632 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:59 old-k8s-version-743648 kubelet[666]: E1028 17:55:59.138036 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1028 17:56:10.300643 215146 logs.go:123] Gathering logs for describe nodes ...
I1028 17:56:10.300657 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1028 17:56:10.461900 215146 logs.go:123] Gathering logs for etcd [ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80] ...
I1028 17:56:10.461929 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80"
I1028 17:56:10.543636 215146 logs.go:123] Gathering logs for etcd [48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420] ...
I1028 17:56:10.543667 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420"
I1028 17:56:10.596502 215146 logs.go:123] Gathering logs for kube-scheduler [0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca] ...
I1028 17:56:10.596534 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca"
I1028 17:56:10.668010 215146 logs.go:123] Gathering logs for kube-proxy [f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1] ...
I1028 17:56:10.668084 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1"
I1028 17:56:10.718076 215146 logs.go:123] Gathering logs for containerd ...
I1028 17:56:10.718154 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1028 17:56:10.799093 215146 logs.go:123] Gathering logs for dmesg ...
I1028 17:56:10.799138 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1028 17:56:10.817662 215146 logs.go:123] Gathering logs for kube-apiserver [c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2] ...
I1028 17:56:10.817695 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2"
I1028 17:56:10.898218 215146 logs.go:123] Gathering logs for kindnet [eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d] ...
I1028 17:56:10.898249 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d"
I1028 17:56:10.949120 215146 logs.go:123] Gathering logs for kubernetes-dashboard [616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245] ...
I1028 17:56:10.949162 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245"
I1028 17:56:11.003335 215146 logs.go:123] Gathering logs for kube-apiserver [be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6] ...
I1028 17:56:11.003364 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6"
I1028 17:56:11.097068 215146 logs.go:123] Gathering logs for kindnet [281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572] ...
I1028 17:56:11.097153 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572"
I1028 17:56:11.199799 215146 logs.go:123] Gathering logs for storage-provisioner [7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78] ...
I1028 17:56:11.199905 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78"
I1028 17:56:11.294982 215146 logs.go:123] Gathering logs for coredns [697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e] ...
I1028 17:56:11.295052 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e"
I1028 17:56:11.350232 215146 logs.go:123] Gathering logs for kube-scheduler [e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98] ...
I1028 17:56:11.350305 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98"
I1028 17:56:11.409560 215146 out.go:358] Setting ErrFile to fd 2...
I1028 17:56:11.409639 215146 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1028 17:56:11.409735 215146 out.go:270] X Problems detected in kubelet:
W1028 17:56:11.409781 215146 out.go:270] Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.148252 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:11.409949 215146 out.go:270] Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.143236 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:11.409983 215146 out.go:270] Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.149385 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:11.410035 215146 out.go:270] Oct 28 17:55:56 old-k8s-version-743648 kubelet[666]: E1028 17:55:56.138548 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:11.410082 215146 out.go:270] Oct 28 17:55:59 old-k8s-version-743648 kubelet[666]: E1028 17:55:59.138036 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1028 17:56:11.410132 215146 out.go:358] Setting ErrFile to fd 2...
I1028 17:56:11.410153 215146 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:56:12.297195 221317 logs.go:123] Gathering logs for dmesg ...
I1028 17:56:12.297229 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1028 17:56:12.313620 221317 logs.go:123] Gathering logs for describe nodes ...
I1028 17:56:12.313648 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1028 17:56:12.469140 221317 logs.go:123] Gathering logs for kube-controller-manager [db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3] ...
I1028 17:56:12.469180 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3"
I1028 17:56:12.545836 221317 logs.go:123] Gathering logs for kindnet [efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f] ...
I1028 17:56:12.545879 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f"
I1028 17:56:12.592348 221317 logs.go:123] Gathering logs for storage-provisioner [488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8] ...
I1028 17:56:12.592418 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8"
I1028 17:56:15.131724 221317 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1028 17:56:15.146711 221317 api_server.go:72] duration metric: took 4m10.744601945s to wait for apiserver process to appear ...
I1028 17:56:15.146738 221317 api_server.go:88] waiting for apiserver healthz status ...
I1028 17:56:15.146781 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1028 17:56:15.146847 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1028 17:56:15.185686 221317 cri.go:89] found id: "1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15"
I1028 17:56:15.185724 221317 cri.go:89] found id: "33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d"
I1028 17:56:15.185735 221317 cri.go:89] found id: ""
I1028 17:56:15.185743 221317 logs.go:282] 2 containers: [1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15 33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d]
I1028 17:56:15.185803 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.189957 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.194125 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1028 17:56:15.194212 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1028 17:56:15.244702 221317 cri.go:89] found id: "9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a"
I1028 17:56:15.244726 221317 cri.go:89] found id: "eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634"
I1028 17:56:15.244731 221317 cri.go:89] found id: ""
I1028 17:56:15.244738 221317 logs.go:282] 2 containers: [9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634]
I1028 17:56:15.244799 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.249068 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.253570 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1028 17:56:15.253649 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1028 17:56:15.295353 221317 cri.go:89] found id: "8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e"
I1028 17:56:15.295374 221317 cri.go:89] found id: "d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4"
I1028 17:56:15.295379 221317 cri.go:89] found id: ""
I1028 17:56:15.295386 221317 logs.go:282] 2 containers: [8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4]
I1028 17:56:15.295447 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.299222 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.302977 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1028 17:56:15.303045 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1028 17:56:15.348806 221317 cri.go:89] found id: "696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7"
I1028 17:56:15.348828 221317 cri.go:89] found id: "85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae"
I1028 17:56:15.348835 221317 cri.go:89] found id: ""
I1028 17:56:15.348842 221317 logs.go:282] 2 containers: [696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7 85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae]
I1028 17:56:15.348899 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.352807 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.356417 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1028 17:56:15.356623 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1028 17:56:15.405732 221317 cri.go:89] found id: "7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca"
I1028 17:56:15.405756 221317 cri.go:89] found id: "9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990"
I1028 17:56:15.405775 221317 cri.go:89] found id: ""
I1028 17:56:15.405801 221317 logs.go:282] 2 containers: [7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca 9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990]
I1028 17:56:15.405938 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.410291 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.413886 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1028 17:56:15.413965 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1028 17:56:15.457597 221317 cri.go:89] found id: "db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3"
I1028 17:56:15.457620 221317 cri.go:89] found id: "a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22"
I1028 17:56:15.457625 221317 cri.go:89] found id: ""
I1028 17:56:15.457633 221317 logs.go:282] 2 containers: [db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3 a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22]
I1028 17:56:15.457692 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.463333 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.467219 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1028 17:56:15.467292 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1028 17:56:15.520169 221317 cri.go:89] found id: "c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080"
I1028 17:56:15.520235 221317 cri.go:89] found id: "efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f"
I1028 17:56:15.520253 221317 cri.go:89] found id: ""
I1028 17:56:15.520275 221317 logs.go:282] 2 containers: [c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080 efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f]
I1028 17:56:15.520389 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.524199 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.534812 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1028 17:56:15.534885 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1028 17:56:15.573528 221317 cri.go:89] found id: "45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd"
I1028 17:56:15.573551 221317 cri.go:89] found id: ""
I1028 17:56:15.573558 221317 logs.go:282] 1 containers: [45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd]
I1028 17:56:15.573620 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.577319 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1028 17:56:15.577416 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1028 17:56:15.613543 221317 cri.go:89] found id: "488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8"
I1028 17:56:15.613565 221317 cri.go:89] found id: "f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345"
I1028 17:56:15.613570 221317 cri.go:89] found id: ""
I1028 17:56:15.613577 221317 logs.go:282] 2 containers: [488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8 f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345]
I1028 17:56:15.613652 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.617193 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:15.620503 221317 logs.go:123] Gathering logs for kube-scheduler [85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae] ...
I1028 17:56:15.620529 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae"
I1028 17:56:15.670160 221317 logs.go:123] Gathering logs for kube-proxy [9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990] ...
I1028 17:56:15.670190 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990"
I1028 17:56:15.708980 221317 logs.go:123] Gathering logs for storage-provisioner [f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345] ...
I1028 17:56:15.709008 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345"
I1028 17:56:15.748641 221317 logs.go:123] Gathering logs for dmesg ...
I1028 17:56:15.748671 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1028 17:56:15.764000 221317 logs.go:123] Gathering logs for etcd [9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a] ...
I1028 17:56:15.764070 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a"
I1028 17:56:15.815288 221317 logs.go:123] Gathering logs for kubernetes-dashboard [45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd] ...
I1028 17:56:15.815322 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd"
I1028 17:56:15.858741 221317 logs.go:123] Gathering logs for storage-provisioner [488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8] ...
I1028 17:56:15.858775 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8"
I1028 17:56:15.899614 221317 logs.go:123] Gathering logs for containerd ...
I1028 17:56:15.899640 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1028 17:56:15.965707 221317 logs.go:123] Gathering logs for kubelet ...
I1028 17:56:15.965744 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1028 17:56:16.041682 221317 logs.go:123] Gathering logs for kube-apiserver [1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15] ...
I1028 17:56:16.041719 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15"
I1028 17:56:16.098536 221317 logs.go:123] Gathering logs for kube-controller-manager [db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3] ...
I1028 17:56:16.098581 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3"
I1028 17:56:16.179851 221317 logs.go:123] Gathering logs for kube-controller-manager [a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22] ...
I1028 17:56:16.179883 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22"
I1028 17:56:16.241253 221317 logs.go:123] Gathering logs for kindnet [efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f] ...
I1028 17:56:16.241287 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f"
I1028 17:56:16.279438 221317 logs.go:123] Gathering logs for container status ...
I1028 17:56:16.279474 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1028 17:56:16.323402 221317 logs.go:123] Gathering logs for describe nodes ...
I1028 17:56:16.323573 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1028 17:56:16.454817 221317 logs.go:123] Gathering logs for etcd [eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634] ...
I1028 17:56:16.454849 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634"
I1028 17:56:16.507195 221317 logs.go:123] Gathering logs for coredns [d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4] ...
I1028 17:56:16.507227 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4"
I1028 17:56:16.549629 221317 logs.go:123] Gathering logs for kube-scheduler [696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7] ...
I1028 17:56:16.549658 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7"
I1028 17:56:16.605801 221317 logs.go:123] Gathering logs for kube-proxy [7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca] ...
I1028 17:56:16.605914 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca"
I1028 17:56:16.645499 221317 logs.go:123] Gathering logs for kindnet [c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080] ...
I1028 17:56:16.645529 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080"
I1028 17:56:16.686928 221317 logs.go:123] Gathering logs for kube-apiserver [33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d] ...
I1028 17:56:16.686959 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d"
I1028 17:56:16.743498 221317 logs.go:123] Gathering logs for coredns [8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e] ...
I1028 17:56:16.743534 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e"
I1028 17:56:19.288244 221317 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1028 17:56:19.298285 221317 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I1028 17:56:19.299293 221317 api_server.go:141] control plane version: v1.31.2
I1028 17:56:19.299320 221317 api_server.go:131] duration metric: took 4.152575279s to wait for apiserver health ...
I1028 17:56:19.299329 221317 system_pods.go:43] waiting for kube-system pods to appear ...
I1028 17:56:19.299353 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1028 17:56:19.299423 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1028 17:56:19.338226 221317 cri.go:89] found id: "1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15"
I1028 17:56:19.338300 221317 cri.go:89] found id: "33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d"
I1028 17:56:19.338317 221317 cri.go:89] found id: ""
I1028 17:56:19.338324 221317 logs.go:282] 2 containers: [1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15 33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d]
I1028 17:56:19.338388 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.342813 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.346754 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1028 17:56:19.346869 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1028 17:56:19.388691 221317 cri.go:89] found id: "9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a"
I1028 17:56:19.388718 221317 cri.go:89] found id: "eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634"
I1028 17:56:19.388723 221317 cri.go:89] found id: ""
I1028 17:56:19.388730 221317 logs.go:282] 2 containers: [9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634]
I1028 17:56:19.388807 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.392912 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.396236 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1028 17:56:19.396332 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1028 17:56:19.432989 221317 cri.go:89] found id: "8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e"
I1028 17:56:19.433013 221317 cri.go:89] found id: "d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4"
I1028 17:56:19.433018 221317 cri.go:89] found id: ""
I1028 17:56:19.433025 221317 logs.go:282] 2 containers: [8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4]
I1028 17:56:19.433100 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.437132 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.440861 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1028 17:56:19.440965 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1028 17:56:19.488803 221317 cri.go:89] found id: "696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7"
I1028 17:56:19.488831 221317 cri.go:89] found id: "85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae"
I1028 17:56:19.488837 221317 cri.go:89] found id: ""
I1028 17:56:19.488844 221317 logs.go:282] 2 containers: [696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7 85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae]
I1028 17:56:19.488992 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.493173 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.496821 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1028 17:56:19.496963 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1028 17:56:19.544983 221317 cri.go:89] found id: "7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca"
I1028 17:56:19.545004 221317 cri.go:89] found id: "9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990"
I1028 17:56:19.545008 221317 cri.go:89] found id: ""
I1028 17:56:19.545016 221317 logs.go:282] 2 containers: [7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca 9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990]
I1028 17:56:19.545072 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.548995 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.552502 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1028 17:56:19.552618 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1028 17:56:19.592862 221317 cri.go:89] found id: "db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3"
I1028 17:56:19.592892 221317 cri.go:89] found id: "a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22"
I1028 17:56:19.592898 221317 cri.go:89] found id: ""
I1028 17:56:19.592905 221317 logs.go:282] 2 containers: [db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3 a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22]
I1028 17:56:19.592972 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.597121 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.601181 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1028 17:56:19.601271 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1028 17:56:19.641868 221317 cri.go:89] found id: "c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080"
I1028 17:56:19.641894 221317 cri.go:89] found id: "efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f"
I1028 17:56:19.641901 221317 cri.go:89] found id: ""
I1028 17:56:19.641908 221317 logs.go:282] 2 containers: [c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080 efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f]
I1028 17:56:19.641991 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.645864 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.650225 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1028 17:56:19.650327 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1028 17:56:19.689198 221317 cri.go:89] found id: "45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd"
I1028 17:56:19.689220 221317 cri.go:89] found id: ""
I1028 17:56:19.689227 221317 logs.go:282] 1 containers: [45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd]
I1028 17:56:19.689282 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.692832 221317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1028 17:56:19.692901 221317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1028 17:56:19.730019 221317 cri.go:89] found id: "488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8"
I1028 17:56:19.730043 221317 cri.go:89] found id: "f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345"
I1028 17:56:19.730048 221317 cri.go:89] found id: ""
I1028 17:56:19.730056 221317 logs.go:282] 2 containers: [488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8 f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345]
I1028 17:56:19.730134 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.734594 221317 ssh_runner.go:195] Run: which crictl
I1028 17:56:19.738486 221317 logs.go:123] Gathering logs for kube-controller-manager [a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22] ...
I1028 17:56:19.738520 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3e7fc24d0a6b0dfe25a29286d248f9fb8e6fa1b207e811a3b6aee7f6dc25a22"
I1028 17:56:19.795921 221317 logs.go:123] Gathering logs for storage-provisioner [f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345] ...
I1028 17:56:19.795956 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3ce6939c594611802dcee78331360a0842815b0456259a4a45448e6873e9345"
I1028 17:56:19.841179 221317 logs.go:123] Gathering logs for describe nodes ...
I1028 17:56:19.841208 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1028 17:56:19.978274 221317 logs.go:123] Gathering logs for coredns [d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4] ...
I1028 17:56:19.978303 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d63def5be7d9e46a5562641c82d380a0d5abcc66ad423b6d8c251ed6a42c08f4"
I1028 17:56:20.028526 221317 logs.go:123] Gathering logs for kube-scheduler [696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7] ...
I1028 17:56:20.028628 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 696d41a68acec693cbcfc9f87a485eed897d5ef2f362be96ae60da3f9c7d23b7"
I1028 17:56:20.077017 221317 logs.go:123] Gathering logs for kube-scheduler [85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae] ...
I1028 17:56:20.077047 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85bfc5a4f8f33d813eb912381df4cd619f7a3fa3b368a54a796edf28aca5feae"
I1028 17:56:20.126316 221317 logs.go:123] Gathering logs for kubernetes-dashboard [45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd] ...
I1028 17:56:20.126357 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45d7397effb45132bfd87686b4f1a1cdc88cbeec8ab86fd7b747f45060f5f2cd"
I1028 17:56:20.177609 221317 logs.go:123] Gathering logs for kubelet ...
I1028 17:56:20.177642 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1028 17:56:20.253987 221317 logs.go:123] Gathering logs for kube-apiserver [1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15] ...
I1028 17:56:20.254023 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e1e5cf861857aa712a7ab10a080e52f99bd5985ec00dc21dd9f2a9205855d15"
I1028 17:56:20.307316 221317 logs.go:123] Gathering logs for etcd [9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a] ...
I1028 17:56:20.307346 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fcea94e303185b679ba542a23301f3dc3318928d26cdded5dd3c30c9153ac0a"
I1028 17:56:20.356355 221317 logs.go:123] Gathering logs for coredns [8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e] ...
I1028 17:56:20.356385 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d408e353ef4a8817b92d8b0db58930bbcaeb14974e7659a307541824bfe848e"
I1028 17:56:20.399016 221317 logs.go:123] Gathering logs for kindnet [efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f] ...
I1028 17:56:20.399050 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efad0295ee7e1fe3c973ea4a3f7a98c72f3d0be5b3d0dee42f0c78f53b76db9f"
I1028 17:56:20.438332 221317 logs.go:123] Gathering logs for storage-provisioner [488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8] ...
I1028 17:56:20.438361 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 488ca23a69b452d0610ad06bc99e9395a13caf8dcfce88e7d09aae9f9de5cdf8"
I1028 17:56:20.483352 221317 logs.go:123] Gathering logs for containerd ...
I1028 17:56:20.483379 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1028 17:56:20.554278 221317 logs.go:123] Gathering logs for etcd [eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634] ...
I1028 17:56:20.554316 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eae4ed2392aa058002dcbd6b4727d9eda2be826e9d365b59dcbaeda29b58b634"
I1028 17:56:20.601434 221317 logs.go:123] Gathering logs for kube-proxy [9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990] ...
I1028 17:56:20.601476 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e147e3805aebde63674e97b61d042ee18d5ee2dcef84c36df4f849be35b1990"
I1028 17:56:20.655227 221317 logs.go:123] Gathering logs for kube-controller-manager [db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3] ...
I1028 17:56:20.655257 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0e6263ce4a2ff18933003f84190449d588b2ca7534114c2d0420303a9ad8e3"
I1028 17:56:20.729038 221317 logs.go:123] Gathering logs for kindnet [c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080] ...
I1028 17:56:20.729071 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c68c99fda2518ff8fdb5438a903a07017af2d8a0eabc9b35d9108e9bacdac080"
I1028 17:56:20.774072 221317 logs.go:123] Gathering logs for dmesg ...
I1028 17:56:20.774105 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1028 17:56:20.791187 221317 logs.go:123] Gathering logs for kube-apiserver [33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d] ...
I1028 17:56:20.791215 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33189239168a6db46cf78a957ecd5c5a6393840a6cf3c9da6c554d5ed902724d"
I1028 17:56:20.843525 221317 logs.go:123] Gathering logs for kube-proxy [7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca] ...
I1028 17:56:20.843560 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f02a5ca6f52f299154855b60bcfafa9536ede56364f84c6e92b77d0e7a87fca"
I1028 17:56:20.889321 221317 logs.go:123] Gathering logs for container status ...
I1028 17:56:20.889347 221317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1028 17:56:23.443839 221317 system_pods.go:59] 9 kube-system pods found
I1028 17:56:23.443879 221317 system_pods.go:61] "coredns-7c65d6cfc9-62xsk" [fda82360-76f5-4f74-8d3c-e46236199c00] Running
I1028 17:56:23.443886 221317 system_pods.go:61] "etcd-no-preload-671620" [241cfc7c-fa77-4bc6-9847-0b18ac368b31] Running
I1028 17:56:23.443891 221317 system_pods.go:61] "kindnet-9ngxf" [7a0b8845-43be-440e-b579-6fab8913eb5e] Running
I1028 17:56:23.443895 221317 system_pods.go:61] "kube-apiserver-no-preload-671620" [4c850cb9-433e-4c98-a887-98a3faa61c3e] Running
I1028 17:56:23.443899 221317 system_pods.go:61] "kube-controller-manager-no-preload-671620" [f20f2162-cbf6-4822-a65a-b3ac5e439e3a] Running
I1028 17:56:23.443903 221317 system_pods.go:61] "kube-proxy-2nnd8" [a42f7e2c-c8cc-45bc-97b5-436a3bbea6f5] Running
I1028 17:56:23.443945 221317 system_pods.go:61] "kube-scheduler-no-preload-671620" [3d0a324d-485f-4e1d-b9af-e2aad21b2db8] Running
I1028 17:56:23.443962 221317 system_pods.go:61] "metrics-server-6867b74b74-5xptn" [944badca-1aa9-4766-b848-adb80cf31ea7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1028 17:56:23.443968 221317 system_pods.go:61] "storage-provisioner" [2e223948-f9a1-45d2-9d5c-5249cd7c994e] Running
I1028 17:56:23.443976 221317 system_pods.go:74] duration metric: took 4.144640273s to wait for pod list to return data ...
I1028 17:56:23.443984 221317 default_sa.go:34] waiting for default service account to be created ...
I1028 17:56:23.447077 221317 default_sa.go:45] found service account: "default"
I1028 17:56:23.447104 221317 default_sa.go:55] duration metric: took 3.114351ms for default service account to be created ...
I1028 17:56:23.447114 221317 system_pods.go:116] waiting for k8s-apps to be running ...
I1028 17:56:23.453733 221317 system_pods.go:86] 9 kube-system pods found
I1028 17:56:23.453767 221317 system_pods.go:89] "coredns-7c65d6cfc9-62xsk" [fda82360-76f5-4f74-8d3c-e46236199c00] Running
I1028 17:56:23.453776 221317 system_pods.go:89] "etcd-no-preload-671620" [241cfc7c-fa77-4bc6-9847-0b18ac368b31] Running
I1028 17:56:23.453781 221317 system_pods.go:89] "kindnet-9ngxf" [7a0b8845-43be-440e-b579-6fab8913eb5e] Running
I1028 17:56:23.453786 221317 system_pods.go:89] "kube-apiserver-no-preload-671620" [4c850cb9-433e-4c98-a887-98a3faa61c3e] Running
I1028 17:56:23.453792 221317 system_pods.go:89] "kube-controller-manager-no-preload-671620" [f20f2162-cbf6-4822-a65a-b3ac5e439e3a] Running
I1028 17:56:23.453796 221317 system_pods.go:89] "kube-proxy-2nnd8" [a42f7e2c-c8cc-45bc-97b5-436a3bbea6f5] Running
I1028 17:56:23.453800 221317 system_pods.go:89] "kube-scheduler-no-preload-671620" [3d0a324d-485f-4e1d-b9af-e2aad21b2db8] Running
I1028 17:56:23.453808 221317 system_pods.go:89] "metrics-server-6867b74b74-5xptn" [944badca-1aa9-4766-b848-adb80cf31ea7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1028 17:56:23.453818 221317 system_pods.go:89] "storage-provisioner" [2e223948-f9a1-45d2-9d5c-5249cd7c994e] Running
I1028 17:56:23.453826 221317 system_pods.go:126] duration metric: took 6.706234ms to wait for k8s-apps to be running ...
I1028 17:56:23.453841 221317 system_svc.go:44] waiting for kubelet service to be running ....
I1028 17:56:23.453898 221317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1028 17:56:23.465780 221317 system_svc.go:56] duration metric: took 11.929282ms WaitForService to wait for kubelet
I1028 17:56:23.465807 221317 kubeadm.go:582] duration metric: took 4m19.06370318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1028 17:56:23.465827 221317 node_conditions.go:102] verifying NodePressure condition ...
I1028 17:56:23.469010 221317 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I1028 17:56:23.469045 221317 node_conditions.go:123] node cpu capacity is 2
I1028 17:56:23.469058 221317 node_conditions.go:105] duration metric: took 3.225381ms to run NodePressure ...
I1028 17:56:23.469070 221317 start.go:241] waiting for startup goroutines ...
I1028 17:56:23.469077 221317 start.go:246] waiting for cluster config update ...
I1028 17:56:23.469088 221317 start.go:255] writing updated cluster config ...
I1028 17:56:23.469393 221317 ssh_runner.go:195] Run: rm -f paused
I1028 17:56:23.536044 221317 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
I1028 17:56:23.539095 221317 out.go:177] * Done! kubectl is now configured to use "no-preload-671620" cluster and "default" namespace by default
I1028 17:56:21.411726 215146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1028 17:56:21.424434 215146 api_server.go:72] duration metric: took 5m53.301552069s to wait for apiserver process to appear ...
I1028 17:56:21.424458 215146 api_server.go:88] waiting for apiserver healthz status ...
I1028 17:56:21.424495 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1028 17:56:21.424586 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1028 17:56:21.462686 215146 cri.go:89] found id: "be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6"
I1028 17:56:21.462708 215146 cri.go:89] found id: "c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2"
I1028 17:56:21.462713 215146 cri.go:89] found id: ""
I1028 17:56:21.462721 215146 logs.go:282] 2 containers: [be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6 c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2]
I1028 17:56:21.462783 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.466844 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.477116 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1028 17:56:21.477193 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1028 17:56:21.528404 215146 cri.go:89] found id: "ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80"
I1028 17:56:21.528424 215146 cri.go:89] found id: "48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420"
I1028 17:56:21.528429 215146 cri.go:89] found id: ""
I1028 17:56:21.528436 215146 logs.go:282] 2 containers: [ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80 48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420]
I1028 17:56:21.528490 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.532483 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.536429 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1028 17:56:21.536503 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1028 17:56:21.577695 215146 cri.go:89] found id: "0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522"
I1028 17:56:21.577724 215146 cri.go:89] found id: "697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e"
I1028 17:56:21.577729 215146 cri.go:89] found id: ""
I1028 17:56:21.577737 215146 logs.go:282] 2 containers: [0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522 697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e]
I1028 17:56:21.577814 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.581875 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.585396 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1028 17:56:21.585469 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1028 17:56:21.626201 215146 cri.go:89] found id: "0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca"
I1028 17:56:21.626225 215146 cri.go:89] found id: "e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98"
I1028 17:56:21.626230 215146 cri.go:89] found id: ""
I1028 17:56:21.626237 215146 logs.go:282] 2 containers: [0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98]
I1028 17:56:21.626295 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.629990 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.633594 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1028 17:56:21.633663 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1028 17:56:21.676659 215146 cri.go:89] found id: "251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b"
I1028 17:56:21.676682 215146 cri.go:89] found id: "f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1"
I1028 17:56:21.676687 215146 cri.go:89] found id: ""
I1028 17:56:21.676694 215146 logs.go:282] 2 containers: [251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1]
I1028 17:56:21.676753 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.681207 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.684753 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1028 17:56:21.684826 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1028 17:56:21.758221 215146 cri.go:89] found id: "45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6"
I1028 17:56:21.758242 215146 cri.go:89] found id: "07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45"
I1028 17:56:21.758247 215146 cri.go:89] found id: ""
I1028 17:56:21.758254 215146 logs.go:282] 2 containers: [45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6 07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45]
I1028 17:56:21.758309 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.762388 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.765992 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1028 17:56:21.766050 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1028 17:56:21.815014 215146 cri.go:89] found id: "281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572"
I1028 17:56:21.815040 215146 cri.go:89] found id: "eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d"
I1028 17:56:21.815045 215146 cri.go:89] found id: ""
I1028 17:56:21.815052 215146 logs.go:282] 2 containers: [281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572 eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d]
I1028 17:56:21.815108 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.818748 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.822240 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1028 17:56:21.822340 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1028 17:56:21.865838 215146 cri.go:89] found id: "616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245"
I1028 17:56:21.865901 215146 cri.go:89] found id: ""
I1028 17:56:21.865916 215146 logs.go:282] 1 containers: [616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245]
I1028 17:56:21.865971 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.870830 215146 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1028 17:56:21.870949 215146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1028 17:56:21.910658 215146 cri.go:89] found id: "1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220"
I1028 17:56:21.910685 215146 cri.go:89] found id: "7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78"
I1028 17:56:21.910689 215146 cri.go:89] found id: ""
I1028 17:56:21.910699 215146 logs.go:282] 2 containers: [1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220 7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78]
I1028 17:56:21.910774 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.914374 215146 ssh_runner.go:195] Run: which crictl
I1028 17:56:21.917740 215146 logs.go:123] Gathering logs for kube-scheduler [e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98] ...
I1028 17:56:21.917766 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98"
I1028 17:56:21.958055 215146 logs.go:123] Gathering logs for kubernetes-dashboard [616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245] ...
I1028 17:56:21.958089 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245"
I1028 17:56:22.000135 215146 logs.go:123] Gathering logs for storage-provisioner [1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220] ...
I1028 17:56:22.000162 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220"
I1028 17:56:22.043187 215146 logs.go:123] Gathering logs for kubelet ...
I1028 17:56:22.043217 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1028 17:56:22.097360 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.823989 666 reflector.go:138] object-"kube-system"/"metrics-server-token-s8546": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-s8546" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.097593 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.825360 666 reflector.go:138] object-"kube-system"/"kindnet-token-rzkm7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rzkm7" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.097805 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.826686 666 reflector.go:138] object-"kube-system"/"coredns-token-bhrx5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-bhrx5" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.098006 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.827119 666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.098222 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.827457 666 reflector.go:138] object-"kube-system"/"kube-proxy-token-j55lg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-j55lg" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.100682 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862535 666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-42rw4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-42rw4" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.100895 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862690 666 reflector.go:138] object-"default"/"default-token-8g42r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8g42r" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.102299 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:48 old-k8s-version-743648 kubelet[666]: E1028 17:50:48.862027 666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-743648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-743648' and this object
W1028 17:56:22.110663 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:52 old-k8s-version-743648 kubelet[666]: E1028 17:50:52.034873 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:22.110853 215146 logs.go:138] Found kubelet problem: Oct 28 17:50:52 old-k8s-version-743648 kubelet[666]: E1028 17:50:52.426336 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.113643 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:04 old-k8s-version-743648 kubelet[666]: E1028 17:51:04.181734 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:22.115310 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:15 old-k8s-version-743648 kubelet[666]: E1028 17:51:15.145992 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.116227 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:20 old-k8s-version-743648 kubelet[666]: E1028 17:51:20.589441 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.116583 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:21 old-k8s-version-743648 kubelet[666]: E1028 17:51:21.593639 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.117028 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:23 old-k8s-version-743648 kubelet[666]: E1028 17:51:23.609906 666 pod_workers.go:191] Error syncing pod 8ffc3abd-c784-474f-80a5-f6a8b25abc51 ("storage-provisioner_kube-system(8ffc3abd-c784-474f-80a5-f6a8b25abc51)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8ffc3abd-c784-474f-80a5-f6a8b25abc51)"
W1028 17:56:22.117353 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:25 old-k8s-version-743648 kubelet[666]: E1028 17:51:25.775266 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.120160 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:29 old-k8s-version-743648 kubelet[666]: E1028 17:51:29.147486 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:22.120698 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:41 old-k8s-version-743648 kubelet[666]: E1028 17:51:41.138852 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.121161 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:41 old-k8s-version-743648 kubelet[666]: E1028 17:51:41.660884 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.121488 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:45 old-k8s-version-743648 kubelet[666]: E1028 17:51:45.774632 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.121670 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:52 old-k8s-version-743648 kubelet[666]: E1028 17:51:52.138242 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.121998 215146 logs.go:138] Found kubelet problem: Oct 28 17:51:59 old-k8s-version-743648 kubelet[666]: E1028 17:51:59.137591 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.122182 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:05 old-k8s-version-743648 kubelet[666]: E1028 17:52:05.138208 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.122765 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:14 old-k8s-version-743648 kubelet[666]: E1028 17:52:14.756224 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.123092 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:15 old-k8s-version-743648 kubelet[666]: E1028 17:52:15.774720 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.125572 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:17 old-k8s-version-743648 kubelet[666]: E1028 17:52:17.148693 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:22.125906 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:28 old-k8s-version-743648 kubelet[666]: E1028 17:52:28.141800 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.126094 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:32 old-k8s-version-743648 kubelet[666]: E1028 17:52:32.138343 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.126423 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:40 old-k8s-version-743648 kubelet[666]: E1028 17:52:40.138205 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.126605 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:44 old-k8s-version-743648 kubelet[666]: E1028 17:52:44.140775 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.127190 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:55 old-k8s-version-743648 kubelet[666]: E1028 17:52:55.867579 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.127374 215146 logs.go:138] Found kubelet problem: Oct 28 17:52:58 old-k8s-version-743648 kubelet[666]: E1028 17:52:58.138213 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.127698 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:05 old-k8s-version-743648 kubelet[666]: E1028 17:53:05.775406 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.127885 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:13 old-k8s-version-743648 kubelet[666]: E1028 17:53:13.138098 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.128213 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:17 old-k8s-version-743648 kubelet[666]: E1028 17:53:17.137660 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.128398 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:25 old-k8s-version-743648 kubelet[666]: E1028 17:53:25.138080 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.128741 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:28 old-k8s-version-743648 kubelet[666]: E1028 17:53:28.141572 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.131182 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:39 old-k8s-version-743648 kubelet[666]: E1028 17:53:39.147007 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 17:56:22.131510 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:43 old-k8s-version-743648 kubelet[666]: E1028 17:53:43.137608 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.131694 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:52 old-k8s-version-743648 kubelet[666]: E1028 17:53:52.138963 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.132018 215146 logs.go:138] Found kubelet problem: Oct 28 17:53:57 old-k8s-version-743648 kubelet[666]: E1028 17:53:57.137666 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.132200 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:03 old-k8s-version-743648 kubelet[666]: E1028 17:54:03.139810 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.132523 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:08 old-k8s-version-743648 kubelet[666]: E1028 17:54:08.137748 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.132714 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:15 old-k8s-version-743648 kubelet[666]: E1028 17:54:15.138210 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.133317 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:22 old-k8s-version-743648 kubelet[666]: E1028 17:54:22.110858 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.133641 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:25 old-k8s-version-743648 kubelet[666]: E1028 17:54:25.774510 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.133824 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:26 old-k8s-version-743648 kubelet[666]: E1028 17:54:26.142477 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.134153 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:36 old-k8s-version-743648 kubelet[666]: E1028 17:54:36.140263 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.134335 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:40 old-k8s-version-743648 kubelet[666]: E1028 17:54:40.138696 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.134661 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:51 old-k8s-version-743648 kubelet[666]: E1028 17:54:51.137643 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.134843 215146 logs.go:138] Found kubelet problem: Oct 28 17:54:54 old-k8s-version-743648 kubelet[666]: E1028 17:54:54.140745 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.135171 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:05 old-k8s-version-743648 kubelet[666]: E1028 17:55:05.137675 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.135355 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:06 old-k8s-version-743648 kubelet[666]: E1028 17:55:06.141555 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.135679 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: E1028 17:55:18.142304 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.135860 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: E1028 17:55:18.144881 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.136044 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.145457 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.136368 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.148252 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.136701 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.143236 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.136883 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.149385 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.137231 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:56 old-k8s-version-743648 kubelet[666]: E1028 17:55:56.138548 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.137419 215146 logs.go:138] Found kubelet problem: Oct 28 17:55:59 old-k8s-version-743648 kubelet[666]: E1028 17:55:59.138036 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:22.137747 215146 logs.go:138] Found kubelet problem: Oct 28 17:56:11 old-k8s-version-743648 kubelet[666]: E1028 17:56:11.140245 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:22.137930 215146 logs.go:138] Found kubelet problem: Oct 28 17:56:13 old-k8s-version-743648 kubelet[666]: E1028 17:56:13.138081 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1028 17:56:22.137946 215146 logs.go:123] Gathering logs for dmesg ...
I1028 17:56:22.137960 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1028 17:56:22.158369 215146 logs.go:123] Gathering logs for kube-apiserver [be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6] ...
I1028 17:56:22.158447 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6"
I1028 17:56:22.213784 215146 logs.go:123] Gathering logs for kube-apiserver [c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2] ...
I1028 17:56:22.213817 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2"
I1028 17:56:22.279566 215146 logs.go:123] Gathering logs for coredns [0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522] ...
I1028 17:56:22.279609 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522"
I1028 17:56:22.320096 215146 logs.go:123] Gathering logs for storage-provisioner [7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78] ...
I1028 17:56:22.320123 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78"
I1028 17:56:22.366694 215146 logs.go:123] Gathering logs for kindnet [281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572] ...
I1028 17:56:22.366732 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572"
I1028 17:56:22.417133 215146 logs.go:123] Gathering logs for describe nodes ...
I1028 17:56:22.417163 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1028 17:56:22.575425 215146 logs.go:123] Gathering logs for coredns [697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e] ...
I1028 17:56:22.575457 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e"
I1028 17:56:22.613582 215146 logs.go:123] Gathering logs for kube-proxy [251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b] ...
I1028 17:56:22.613610 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b"
I1028 17:56:22.660854 215146 logs.go:123] Gathering logs for kube-controller-manager [45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6] ...
I1028 17:56:22.660882 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6"
I1028 17:56:22.717997 215146 logs.go:123] Gathering logs for kube-controller-manager [07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45] ...
I1028 17:56:22.718031 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45"
I1028 17:56:22.775654 215146 logs.go:123] Gathering logs for etcd [ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80] ...
I1028 17:56:22.775689 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80"
I1028 17:56:22.829841 215146 logs.go:123] Gathering logs for etcd [48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420] ...
I1028 17:56:22.829880 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420"
I1028 17:56:22.873201 215146 logs.go:123] Gathering logs for kube-proxy [f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1] ...
I1028 17:56:22.873231 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1"
I1028 17:56:22.917373 215146 logs.go:123] Gathering logs for container status ...
I1028 17:56:22.917399 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1028 17:56:22.966858 215146 logs.go:123] Gathering logs for kube-scheduler [0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca] ...
I1028 17:56:22.966891 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca"
I1028 17:56:23.010694 215146 logs.go:123] Gathering logs for kindnet [eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d] ...
I1028 17:56:23.010731 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d"
I1028 17:56:23.069106 215146 logs.go:123] Gathering logs for containerd ...
I1028 17:56:23.069137 215146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1028 17:56:23.132899 215146 out.go:358] Setting ErrFile to fd 2...
I1028 17:56:23.132932 215146 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1028 17:56:23.133001 215146 out.go:270] X Problems detected in kubelet:
W1028 17:56:23.133017 215146 out.go:270] Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.149385 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:23.133051 215146 out.go:270] Oct 28 17:55:56 old-k8s-version-743648 kubelet[666]: E1028 17:55:56.138548 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:23.133060 215146 out.go:270] Oct 28 17:55:59 old-k8s-version-743648 kubelet[666]: E1028 17:55:59.138036 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 17:56:23.133067 215146 out.go:270] Oct 28 17:56:11 old-k8s-version-743648 kubelet[666]: E1028 17:56:11.140245 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
W1028 17:56:23.133073 215146 out.go:270] Oct 28 17:56:13 old-k8s-version-743648 kubelet[666]: E1028 17:56:13.138081 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1028 17:56:23.133082 215146 out.go:358] Setting ErrFile to fd 2...
I1028 17:56:23.133089 215146 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 17:56:33.134758 215146 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1028 17:56:33.146466 215146 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1028 17:56:33.148907 215146 out.go:201]
W1028 17:56:33.150944 215146 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1028 17:56:33.150988 215146 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1028 17:56:33.151008 215146 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1028 17:56:33.151018 215146 out.go:270] *
W1028 17:56:33.151915 215146 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1028 17:56:33.154167 215146 out.go:201]
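The run fails the version gate, not liveness: /healthz returns 200 just above, yet minikube exits with K8S_UNHEALTHY_CONTROL_PLANE because the control plane "never updated to v1.20.0" within the 6m0s wait. The two follow-ups the output itself recommends can be run as-is; a minimal sketch, assuming the profile name from this run:

  # capture the full log bundle for the GitHub issue, per the box above
  minikube -p old-k8s-version-743648 logs --file=logs.txt
  # suggested recovery before retrying: remove all profiles and cached state
  minikube delete --all --purge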
==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                        ATTEMPT   POD ID          POD
97a314c29d1a7   523cad1a4df73   2 minutes ago   Exited    dashboard-metrics-scraper   5         06ed3d331b5d6   dashboard-metrics-scraper-8d5bb5db8-jx67q
1aef6c7a01064   ba04bb24b9575   4 minutes ago   Running   storage-provisioner         2         7eb25643f176f   storage-provisioner
616276ec12d45   20b332c9a70d8   5 minutes ago   Running   kubernetes-dashboard        0         aac3897da3c92   kubernetes-dashboard-cd95d586-k2vc6
251d74de7c72a   25a5233254979   5 minutes ago   Running   kube-proxy                  1         d4a324f2431ae   kube-proxy-zzhjw
2db1c67b9f816   1611cd07b61d5   5 minutes ago   Running   busybox                     1         773961302937b   busybox
0e45eb02d6ef2   db91994f4ee8f   5 minutes ago   Running   coredns                     1         817eccfb2e131   coredns-74ff55c5b-97b6q
7dafca8166996   ba04bb24b9575   5 minutes ago   Exited    storage-provisioner         1         7eb25643f176f   storage-provisioner
281df8d7ccec7   0bcd66b03df5f   5 minutes ago   Running   kindnet-cni                 1         a63b55d8dd3ce   kindnet-dzxmm
45d2c62bb3793   1df8a2b116bd1   5 minutes ago   Running   kube-controller-manager     1         78a459fa030a6   kube-controller-manager-old-k8s-version-743648
be449cd4e4ecc   2c08bbbc02d3a   5 minutes ago   Running   kube-apiserver              1         5e7d27c67de9a   kube-apiserver-old-k8s-version-743648
ef0e952b69c48   05b738aa1bc63   5 minutes ago   Running   etcd                        1         61926a97b89c1   etcd-old-k8s-version-743648
0f6d7fac26d93   e7605f88f17d6   5 minutes ago   Running   kube-scheduler              1         805065fb9e562   kube-scheduler-old-k8s-version-743648
c8b4ccecdd3e1   1611cd07b61d5   6 minutes ago   Exited    busybox                     0         ed7e59b9876b7   busybox
697a2238311c1   db91994f4ee8f   7 minutes ago   Exited    coredns                     0         e06d960001c29   coredns-74ff55c5b-97b6q
eef038eb4862c   0bcd66b03df5f   8 minutes ago   Exited    kindnet-cni                 0         d226fda483eb3   kindnet-dzxmm
f8257840a24c6   25a5233254979   8 minutes ago   Exited    kube-proxy                  0         e246b9714ae6e   kube-proxy-zzhjw
48a3894d832c5   05b738aa1bc63   8 minutes ago   Exited    etcd                        0         aa519b5e3370b   etcd-old-k8s-version-743648
e659d3edb50ed   e7605f88f17d6   8 minutes ago   Exited    kube-scheduler              0         174d71dab1973   kube-scheduler-old-k8s-version-743648
07fd4f4cf88e3   1df8a2b116bd1   8 minutes ago   Exited    kube-controller-manager     0         b213618d19880   kube-controller-manager-old-k8s-version-743648
c2a071551dfcc   2c08bbbc02d3a   8 minutes ago   Exited    kube-apiserver              0         f0d2a2848a7ba   kube-apiserver-old-k8s-version-743648
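Only dashboard-metrics-scraper sits in a crash loop in this table (ATTEMPT 5, STATE Exited); metrics-server has no container row at all because its image never pulled. To read the scraper's final output one can replay the same crictl call the harness uses above; a sketch, assuming ssh access to this profile and that crictl accepts the ID prefix shown in the table:

  minikube -p old-k8s-version-743648 ssh "sudo crictl logs --tail 50 97a314c29d1a7"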
==> containerd <==
Oct 28 17:52:55 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:52:55.183027902Z" level=info msg="CreateContainer within sandbox \"06ed3d331b5d617b962e3d7de3eda2fd506bb6545a42d0fadfd021e1f5015d43\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"3bda48baa7560e5104b44770d96516c65c8e186c85ef7aed71f630710ed775c0\""
Oct 28 17:52:55 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:52:55.184002886Z" level=info msg="StartContainer for \"3bda48baa7560e5104b44770d96516c65c8e186c85ef7aed71f630710ed775c0\""
Oct 28 17:52:55 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:52:55.265876987Z" level=info msg="StartContainer for \"3bda48baa7560e5104b44770d96516c65c8e186c85ef7aed71f630710ed775c0\" returns successfully"
Oct 28 17:52:55 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:52:55.300236928Z" level=info msg="shim disconnected" id=3bda48baa7560e5104b44770d96516c65c8e186c85ef7aed71f630710ed775c0 namespace=k8s.io
Oct 28 17:52:55 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:52:55.300310650Z" level=warning msg="cleaning up after shim disconnected" id=3bda48baa7560e5104b44770d96516c65c8e186c85ef7aed71f630710ed775c0 namespace=k8s.io
Oct 28 17:52:55 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:52:55.300328881Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 28 17:52:55 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:52:55.872193804Z" level=info msg="RemoveContainer for \"a6758e5ea5ab8a3b92ae51b01fcce378fd3fb591d0be0af560c4ae6f86e03708\""
Oct 28 17:52:55 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:52:55.878356323Z" level=info msg="RemoveContainer for \"a6758e5ea5ab8a3b92ae51b01fcce378fd3fb591d0be0af560c4ae6f86e03708\" returns successfully"
Oct 28 17:53:39 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:53:39.138978872Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 17:53:39 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:53:39.144784383Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Oct 28 17:53:39 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:53:39.146518413Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Oct 28 17:53:39 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:53:39.146652065Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Oct 28 17:54:21 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:54:21.139884528Z" level=info msg="CreateContainer within sandbox \"06ed3d331b5d617b962e3d7de3eda2fd506bb6545a42d0fadfd021e1f5015d43\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Oct 28 17:54:21 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:54:21.156281312Z" level=info msg="CreateContainer within sandbox \"06ed3d331b5d617b962e3d7de3eda2fd506bb6545a42d0fadfd021e1f5015d43\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f\""
Oct 28 17:54:21 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:54:21.157091023Z" level=info msg="StartContainer for \"97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f\""
Oct 28 17:54:21 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:54:21.220734745Z" level=info msg="StartContainer for \"97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f\" returns successfully"
Oct 28 17:54:21 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:54:21.255515271Z" level=info msg="shim disconnected" id=97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f namespace=k8s.io
Oct 28 17:54:21 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:54:21.255586244Z" level=warning msg="cleaning up after shim disconnected" id=97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f namespace=k8s.io
Oct 28 17:54:21 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:54:21.255595778Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 28 17:54:22 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:54:22.112781372Z" level=info msg="RemoveContainer for \"3bda48baa7560e5104b44770d96516c65c8e186c85ef7aed71f630710ed775c0\""
Oct 28 17:54:22 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:54:22.126976851Z" level=info msg="RemoveContainer for \"3bda48baa7560e5104b44770d96516c65c8e186c85ef7aed71f630710ed775c0\" returns successfully"
Oct 28 17:56:25 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:56:25.139032795Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 17:56:25 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:56:25.155819968Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Oct 28 17:56:25 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:56:25.157924085Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Oct 28 17:56:25 old-k8s-version-743648 containerd[571]: time="2024-10-28T17:56:25.157948897Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
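Both PullImage failures reduce to the same root cause: fake.domain never resolves (the lookup against 192.168.76.1:53 returns "no such host"), which matches the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line at startup, so the metrics-server ImagePullBackOff is the deliberately unreachable registry this test configures rather than a containerd fault. A quick confirmation from the node; a sketch, assuming getent is available in the node image:

  # reproduce the resolver failure containerd logs above
  minikube -p old-k8s-version-743648 ssh "getent hosts fake.domain"
  # reproduce the pull error end to end
  minikube -p old-k8s-version-743648 ssh "sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4"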
==> coredns [0e45eb02d6ef238469955efda1ee7684d106fb78843e77c663a6530e9a4ce522] <==
I1028 17:51:22.780161 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-28 17:50:52.779480133 +0000 UTC m=+0.070982856) (total time: 30.000503609s):
Trace[2019727887]: [30.000503609s] [30.000503609s] END
E1028 17:51:22.780493 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I1028 17:51:22.780515 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-28 17:50:52.779950804 +0000 UTC m=+0.071453528) (total time: 30.000553987s):
Trace[939984059]: [30.000553987s] [30.000553987s] END
E1028 17:51:22.780786 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I1028 17:51:22.780432 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-28 17:50:52.780154453 +0000 UTC m=+0.071657176) (total time: 30.000259592s):
Trace[1427131847]: [30.000259592s] [30.000259592s] END
E1028 17:51:22.780989 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:35993 - 60538 "HINFO IN 6197028704380980754.310336088400240551. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024373997s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
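This restarted coredns instance spends its first 30s unable to list Namespaces, Services, and Endpoints at 10.96.0.1:443 (the apiserver was still coming up behind the restarted container), then serves normally, so the i/o timeouts are transient rather than part of the failure under test. 10.96.0.1 is the ClusterIP of the default kubernetes Service; a cross-check, a sketch assuming kubectl is pointed at this profile's kubeconfig:

  kubectl get svc kubernetes -o wide
  # readiness of the coredns replicas behind that VIP
  kubectl -n kube-system get pods -l k8s-app=kube-dns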
==> coredns [697a2238311c1986504052595cca9231d4b6ae089cc7a1e5ea057443459f150e] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:46363 - 7131 "HINFO IN 4792780249260431917.4391253746878207862. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006161507s
==> describe nodes <==
Name: old-k8s-version-743648
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-743648
kubernetes.io/os=linux
minikube.k8s.io/commit=eb485f4d2e746257bc08ad8e2f39a06044008ba8
minikube.k8s.io/name=old-k8s-version-743648
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_10_28T17_48_13_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 28 Oct 2024 17:48:09 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-743648
AcquireTime: <unset>
RenewTime: Mon, 28 Oct 2024 17:56:31 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----             ------  -----------------                 ------------------                ------                      -------
MemoryPressure   False   Mon, 28 Oct 2024 17:51:39 +0000   Mon, 28 Oct 2024 17:48:02 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure     False   Mon, 28 Oct 2024 17:51:39 +0000   Mon, 28 Oct 2024 17:48:02 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure      False   Mon, 28 Oct 2024 17:51:39 +0000   Mon, 28 Oct 2024 17:48:02 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready            True    Mon, 28 Oct 2024 17:51:39 +0000   Mon, 28 Oct 2024 17:50:49 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-743648
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: e6b796c4563046fd97f17ef541c604e7
System UUID: b6d54d77-f5c8-48c8-a0c4-ee74c7ae873a
Boot ID: a43c4fe7-73f5-4eea-81c4-9720365cd829
Kernel Version: 5.15.0-1071-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.22
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace              Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------              ----                                               ------------  ----------  ---------------  -------------  ---
default                busybox                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
kube-system            coredns-74ff55c5b-97b6q                            100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m7s
kube-system            etcd-old-k8s-version-743648                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m13s
kube-system            kindnet-dzxmm                                      100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m7s
kube-system            kube-apiserver-old-k8s-version-743648              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m13s
kube-system            kube-controller-manager-old-k8s-version-743648     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m13s
kube-system            kube-proxy-zzhjw                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
kube-system            kube-scheduler-old-k8s-version-743648              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m13s
kube-system            metrics-server-9975d5f86-4qgm8                     100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
kube-system            storage-provisioner                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
kubernetes-dashboard   dashboard-metrics-scraper-8d5bb5db8-jx67q          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
kubernetes-dashboard   kubernetes-dashboard-cd95d586-k2vc6                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests      Limits
--------           --------      ------
cpu                950m (47%)    100m (5%)
memory             420Mi (5%)    220Mi (2%)
ephemeral-storage  100Mi (0%)    0 (0%)
hugepages-1Gi      0 (0%)        0 (0%)
hugepages-2Mi      0 (0%)        0 (0%)
hugepages-32Mi     0 (0%)        0 (0%)
hugepages-64Ki     0 (0%)        0 (0%)
Events:
Type    Reason                   Age                    From        Message
----    ------                   ----                   ----        -------
Normal  NodeHasSufficientMemory  8m33s (x5 over 8m33s)  kubelet     Node old-k8s-version-743648 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    8m33s (x5 over 8m33s)  kubelet     Node old-k8s-version-743648 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     8m33s (x4 over 8m33s)  kubelet     Node old-k8s-version-743648 status is now: NodeHasSufficientPID
Normal  Starting                 8m14s                  kubelet     Starting kubelet.
Normal  NodeHasSufficientMemory  8m14s                  kubelet     Node old-k8s-version-743648 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    8m14s                  kubelet     Node old-k8s-version-743648 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     8m14s                  kubelet     Node old-k8s-version-743648 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  8m13s                  kubelet     Updated Node Allocatable limit across pods
Normal  NodeReady                8m7s                   kubelet     Node old-k8s-version-743648 status is now: NodeReady
Normal  Starting                 8m5s                   kube-proxy  Starting kube-proxy.
Normal  Starting                 5m58s                  kubelet     Starting kubelet.
Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-743648 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-743648 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     5m58s (x7 over 5m58s)  kubelet     Node old-k8s-version-743648 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
Normal  Starting                 5m41s                  kube-proxy  Starting kube-proxy.
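describe nodes confirms the node itself is healthy (Ready=True, no pressure conditions firing), and the only stuck workloads in the pod list are metrics-server-9975d5f86-4qgm8 and dashboard-metrics-scraper-8d5bb5db8-jx67q. A quick triage pass with kubectl; a sketch, assuming the same kubeconfig:

  # list everything not in the Running phase across all namespaces
  kubectl get pods -A --no-headers | grep -vw Running
  # event trail for the image-pull failure
  kubectl -n kube-system describe pod metrics-server-9975d5f86-4qgm8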
==> dmesg <==
[Oct28 16:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014925] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.483288] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.028373] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.032276] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.019435] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.671496] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.413135] kauditd_printk_skb: 34 callbacks suppressed
[Oct28 17:17] hrtimer: interrupt took 9249663 ns
[Oct28 17:40] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
==> etcd [48a3894d832c52de4abeaef07543d9a0e58a0ab2dc79dc5335df2fe638fa8420] <==
raft2024/10/28 17:48:03 INFO: ea7e25599daad906 is starting a new election at term 1
raft2024/10/28 17:48:03 INFO: ea7e25599daad906 became candidate at term 2
raft2024/10/28 17:48:03 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2024/10/28 17:48:03 INFO: ea7e25599daad906 became leader at term 2
raft2024/10/28 17:48:03 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2024-10-28 17:48:03.527351 I | etcdserver: setting up the initial cluster version to 3.4
2024-10-28 17:48:03.529839 N | etcdserver/membership: set the initial cluster version to 3.4
2024-10-28 17:48:03.530252 I | etcdserver/api: enabled capabilities for version 3.4
2024-10-28 17:48:03.530376 I | etcdserver: published {Name:old-k8s-version-743648 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2024-10-28 17:48:03.530470 I | embed: ready to serve client requests
2024-10-28 17:48:03.532204 I | embed: serving client requests on 127.0.0.1:2379
2024-10-28 17:48:03.532500 I | embed: ready to serve client requests
2024-10-28 17:48:03.539573 I | embed: serving client requests on 192.168.76.2:2379
2024-10-28 17:48:21.777795 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:48:27.005759 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:48:37.009525 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:48:47.006660 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:48:57.007130 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:49:07.007511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:49:17.006130 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:49:27.006506 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:49:37.007874 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:49:47.008087 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:49:57.008033 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:50:07.014372 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [ef0e952b69c48532fae761ffded3c4007687b28e682a598ada510d32c607af80] <==
2024-10-28 17:52:33.330698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:52:43.330777 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:52:53.330641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:53:03.330786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:53:13.330677 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:53:23.330718 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:53:33.330689 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:53:43.330565 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:53:53.330813 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:54:03.330797 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:54:13.330649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:54:23.335400 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:54:33.330688 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:54:43.330840 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:54:53.330780 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:55:03.330703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:55:13.330851 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:55:23.330653 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:55:33.330684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:55:43.330706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:55:53.330898 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:56:03.330711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:56:13.331075 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:56:23.331009 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 17:56:33.330990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
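The restarted etcd answers its /health probe with 200 roughly every 10s for the whole window, so the datastore is not implicated in the exit status. The same check can be made by hand from inside the etcd pod; a sketch, assuming kubeadm's usual certificate layout under /var/lib/minikube/certs/etcd (an assumption, the paths are not shown in this log):

  kubectl -n kube-system exec etcd-old-k8s-version-743648 -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    endpoint health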
==> kernel <==
17:56:35 up 1:39, 0 users, load average: 0.29, 1.36, 1.99
Linux old-k8s-version-743648 5.15.0-1071-aws #77~20.04.1-Ubuntu SMP Thu Oct 3 19:34:36 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [281df8d7ccec716f0fc1c8e942acef13277e5fa97077ae40c404e6d69dda5572] <==
I1028 17:54:32.924257 1 main.go:300] handling current node
I1028 17:54:42.925415 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:54:42.925509 1 main.go:300] handling current node
I1028 17:54:52.920204 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:54:52.920241 1 main.go:300] handling current node
I1028 17:55:02.921795 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:55:02.921827 1 main.go:300] handling current node
I1028 17:55:12.928862 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:55:12.928897 1 main.go:300] handling current node
I1028 17:55:22.921111 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:55:22.921203 1 main.go:300] handling current node
I1028 17:55:32.925180 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:55:32.925225 1 main.go:300] handling current node
I1028 17:55:42.927488 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:55:42.927526 1 main.go:300] handling current node
I1028 17:55:52.919108 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:55:52.919144 1 main.go:300] handling current node
I1028 17:56:02.925579 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:56:02.925614 1 main.go:300] handling current node
I1028 17:56:12.928652 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:56:12.928687 1 main.go:300] handling current node
I1028 17:56:22.928646 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:56:22.928677 1 main.go:300] handling current node
I1028 17:56:32.920842 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:56:32.920880 1 main.go:300] handling current node
==> kindnet [eef038eb4862c59e275e52ad2ccc7e110478d7d67b0ed0af7e26ca0f5159430d] <==
I1028 17:48:31.621158 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
I1028 17:48:31.917755 1 controller.go:338] Starting controller kube-network-policies
I1028 17:48:31.917848 1 controller.go:342] Waiting for informer caches to sync
I1028 17:48:31.918437 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I1028 17:48:32.120649 1 shared_informer.go:320] Caches are synced for kube-network-policies
I1028 17:48:32.120675 1 metrics.go:61] Registering metrics
I1028 17:48:32.120721 1 controller.go:378] Syncing nftables rules
I1028 17:48:41.920675 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:48:41.920757 1 main.go:300] handling current node
I1028 17:48:51.917849 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:48:51.917890 1 main.go:300] handling current node
I1028 17:49:01.925726 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:49:01.925766 1 main.go:300] handling current node
I1028 17:49:11.926536 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:49:11.926575 1 main.go:300] handling current node
I1028 17:49:21.917597 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:49:21.917632 1 main.go:300] handling current node
I1028 17:49:31.918638 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:49:31.918673 1 main.go:300] handling current node
I1028 17:49:41.926679 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:49:41.926712 1 main.go:300] handling current node
I1028 17:49:51.923849 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:49:51.923884 1 main.go:300] handling current node
I1028 17:50:01.917608 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 17:50:01.917650 1 main.go:300] handling current node
==> kube-apiserver [be449cd4e4ecc581a0ca9ed37c8fda624d9f4250c08451fefe3886e4322c02b6] <==
I1028 17:53:07.219337 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 17:53:07.219346 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1028 17:53:42.761182 1 client.go:360] parsed scheme: "passthrough"
I1028 17:53:42.761297 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 17:53:42.761335 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1028 17:53:52.835637 1 handler_proxy.go:102] no RequestInfo found in the context
E1028 17:53:52.835729 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1028 17:53:52.835749 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1028 17:54:15.931701 1 client.go:360] parsed scheme: "passthrough"
I1028 17:54:15.931745 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 17:54:15.931956 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1028 17:54:55.175043 1 client.go:360] parsed scheme: "passthrough"
I1028 17:54:55.175089 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 17:54:55.175098 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1028 17:55:38.015551 1 client.go:360] parsed scheme: "passthrough"
I1028 17:55:38.016746 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 17:55:38.016788 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1028 17:55:49.867709 1 handler_proxy.go:102] no RequestInfo found in the context
E1028 17:55:49.867975 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1028 17:55:49.868003 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1028 17:56:11.169962 1 client.go:360] parsed scheme: "passthrough"
I1028 17:56:11.170007 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 17:56:11.170016 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
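The only errors in the running apiserver are the OpenAPI aggregation retries for v1beta1.metrics.k8s.io: the aggregated API is registered, but its backing metrics-server pod never started (the ImagePullBackOff above), so discovery gets a 503 and the controller requeues. The APIService object records the same condition; a sketch, assuming kubectl against this cluster:

  kubectl get apiservice v1beta1.metrics.k8s.io
  # the Available condition's message should mirror the 503 seen here
  kubectl get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].message}{"\n"}'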
==> kube-apiserver [c2a071551dfccbcb237c217f313624253b80441d6a33cd928338054bbfa893f2] <==
I1028 17:48:09.236059 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1028 17:48:09.236112 1 apf_controller.go:253] Running API Priority and Fairness config worker
I1028 17:48:09.292048 1 controller.go:606] quota admission added evaluator for: namespaces
I1028 17:48:09.963307 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1028 17:48:09.963334 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1028 17:48:09.971698 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1028 17:48:09.976072 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1028 17:48:09.976102 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1028 17:48:10.498168 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1028 17:48:10.553008 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1028 17:48:10.679708 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I1028 17:48:10.680973 1 controller.go:606] quota admission added evaluator for: endpoints
I1028 17:48:10.685151 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1028 17:48:10.963509 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1028 17:48:11.689860 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1028 17:48:12.405487 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1028 17:48:12.489839 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1028 17:48:27.761752 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1028 17:48:27.787148 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1028 17:48:47.886155 1 client.go:360] parsed scheme: "passthrough"
I1028 17:48:47.886200 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 17:48:47.886345 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1028 17:49:31.923523 1 client.go:360] parsed scheme: "passthrough"
I1028 17:49:31.923617 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 17:49:31.923659 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [07fd4f4cf88e3d8dc2aded1a9598aa51e6c223d50fb52791d54c9b3add751a45] <==
I1028 17:48:27.865373 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I1028 17:48:27.865725 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1028 17:48:27.867001 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
E1028 17:48:27.875514 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E1028 17:48:27.879712 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I1028 17:48:27.881970 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-97b6q"
I1028 17:48:27.882013 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dzxmm"
I1028 17:48:27.898379 1 shared_informer.go:247] Caches are synced for service account
I1028 17:48:27.910512 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zzhjw"
I1028 17:48:27.935710 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-bgvdq"
I1028 17:48:27.950680 1 shared_informer.go:247] Caches are synced for resource quota
E1028 17:48:28.009883 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I1028 17:48:28.014018 1 shared_informer.go:247] Caches are synced for HPA
I1028 17:48:28.168446 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1028 17:48:28.380988 1 shared_informer.go:247] Caches are synced for garbage collector
I1028 17:48:28.397937 1 shared_informer.go:247] Caches are synced for garbage collector
I1028 17:48:28.397958 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1028 17:48:28.597715 1 request.go:655] Throttling request took 1.048071519s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
I1028 17:48:29.400761 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1028 17:48:29.400795 1 shared_informer.go:247] Caches are synced for resource quota
I1028 17:48:29.582550 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I1028 17:48:29.651725 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-bgvdq"
I1028 17:48:32.702582 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I1028 17:50:06.724159 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E1028 17:50:06.877962 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
==> kube-controller-manager [45d2c62bb3793fa6d89a6c164b4374c47a3f66a974d945765740acb8ccbedcb6] <==
W1028 17:52:14.964054 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 17:52:41.027833 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 17:52:46.614371 1 request.go:655] Throttling request took 1.048275299s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1028 17:52:47.465834 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 17:53:11.529839 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 17:53:19.116346 1 request.go:655] Throttling request took 1.048242119s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
W1028 17:53:20.008739 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 17:53:42.031925 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 17:53:51.667739 1 request.go:655] Throttling request took 1.04562165s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
W1028 17:53:52.519164 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 17:54:12.534083 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 17:54:24.169637 1 request.go:655] Throttling request took 1.048365924s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1028 17:54:25.021432 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 17:54:43.035983 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 17:54:56.672043 1 request.go:655] Throttling request took 1.048248281s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1028 17:54:57.523540 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 17:55:13.538025 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 17:55:29.173913 1 request.go:655] Throttling request took 1.048232162s, request: GET:https://192.168.76.2:8443/apis/batch/v1?timeout=32s
W1028 17:55:30.026151 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 17:55:44.040333 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 17:56:01.699631 1 request.go:655] Throttling request took 1.036877594s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1028 17:56:02.551307 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 17:56:14.542422 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 17:56:34.201877 1 request.go:655] Throttling request took 1.048028506s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
W1028 17:56:35.055405 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-proxy [251d74de7c72ac448ce2eade1fbfd2803813278b8c45ac412fb36f91a30f089b] <==
I1028 17:50:53.384233 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1028 17:50:53.384331 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1028 17:50:53.442000 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1028 17:50:53.442288 1 server_others.go:185] Using iptables Proxier.
I1028 17:50:53.442679 1 server.go:650] Version: v1.20.0
I1028 17:50:53.453106 1 config.go:315] Starting service config controller
I1028 17:50:53.453318 1 shared_informer.go:240] Waiting for caches to sync for service config
I1028 17:50:53.453430 1 config.go:224] Starting endpoint slice config controller
I1028 17:50:53.453521 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1028 17:50:53.553496 1 shared_informer.go:247] Caches are synced for service config
I1028 17:50:53.553686 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [f8257840a24c69fe77133dee32fa0a94a2355d1b8d5e80976b9501b041d4e8c1] <==
I1028 17:48:29.071518 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1028 17:48:29.071597 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1028 17:48:29.096921 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1028 17:48:29.097038 1 server_others.go:185] Using iptables Proxier.
I1028 17:48:29.097283 1 server.go:650] Version: v1.20.0
I1028 17:48:29.100713 1 config.go:315] Starting service config controller
I1028 17:48:29.100725 1 shared_informer.go:240] Waiting for caches to sync for service config
I1028 17:48:29.100746 1 config.go:224] Starting endpoint slice config controller
I1028 17:48:29.100750 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1028 17:48:29.200880 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1028 17:48:29.200948 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [0f6d7fac26d934dce38f183307147896200c44885782c7111659acfe936895ca] <==
I1028 17:50:42.289320 1 serving.go:331] Generated self-signed cert in-memory
W1028 17:50:48.757388 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1028 17:50:48.757429 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1028 17:50:48.757442 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1028 17:50:48.757447 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1028 17:50:49.057604 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1028 17:50:49.070734 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1028 17:50:49.070779 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1028 17:50:49.070800 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1028 17:50:49.271694 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [e659d3edb50edb51ab6db59744e6335c9d2964e447e8376879aafa92499e8d98] <==
W1028 17:48:09.178863 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1028 17:48:09.178897 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1028 17:48:09.178908 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1028 17:48:09.178914 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1028 17:48:09.289982 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1028 17:48:09.290004 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1028 17:48:09.290771 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1028 17:48:09.290197 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1028 17:48:09.304875 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1028 17:48:09.305398 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1028 17:48:09.305625 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1028 17:48:09.305836 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1028 17:48:09.306045 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1028 17:48:09.306271 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1028 17:48:09.306475 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1028 17:48:09.306686 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1028 17:48:09.306889 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1028 17:48:09.307064 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1028 17:48:09.307254 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1028 17:48:09.307448 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1028 17:48:10.173176 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1028 17:48:10.229458 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1028 17:48:10.254472 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1028 17:48:10.324655 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I1028 17:48:12.390943 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Oct 28 17:54:54 old-k8s-version-743648 kubelet[666]: E1028 17:54:54.140745 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 17:55:05 old-k8s-version-743648 kubelet[666]: I1028 17:55:05.137332 666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f
Oct 28 17:55:05 old-k8s-version-743648 kubelet[666]: E1028 17:55:05.137675 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
Oct 28 17:55:06 old-k8s-version-743648 kubelet[666]: E1028 17:55:06.141555 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: I1028 17:55:18.141555 666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f
Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: E1028 17:55:18.142304 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
Oct 28 17:55:18 old-k8s-version-743648 kubelet[666]: E1028 17:55:18.144881 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.145457 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: I1028 17:55:30.147543 666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f
Oct 28 17:55:30 old-k8s-version-743648 kubelet[666]: E1028 17:55:30.148252 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: I1028 17:55:45.142270 666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f
Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.143236 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
Oct 28 17:55:45 old-k8s-version-743648 kubelet[666]: E1028 17:55:45.149385 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 17:55:56 old-k8s-version-743648 kubelet[666]: I1028 17:55:56.137715 666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f
Oct 28 17:55:56 old-k8s-version-743648 kubelet[666]: E1028 17:55:56.138548 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
Oct 28 17:55:59 old-k8s-version-743648 kubelet[666]: E1028 17:55:59.138036 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 17:56:11 old-k8s-version-743648 kubelet[666]: I1028 17:56:11.139857 666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f
Oct 28 17:56:11 old-k8s-version-743648 kubelet[666]: E1028 17:56:11.140245 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
Oct 28 17:56:13 old-k8s-version-743648 kubelet[666]: E1028 17:56:13.138081 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 17:56:22 old-k8s-version-743648 kubelet[666]: I1028 17:56:22.137678 666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 97a314c29d1a7bf3d28628be00c27d765855c61ba2697a31be56b8830b3fbf2f
Oct 28 17:56:22 old-k8s-version-743648 kubelet[666]: E1028 17:56:22.139322 666 pod_workers.go:191] Error syncing pod 712fa420-47ed-4b4c-9393-81f7de5567ce ("dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jx67q_kubernetes-dashboard(712fa420-47ed-4b4c-9393-81f7de5567ce)"
Oct 28 17:56:25 old-k8s-version-743648 kubelet[666]: E1028 17:56:25.158224 666 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Oct 28 17:56:25 old-k8s-version-743648 kubelet[666]: E1028 17:56:25.158282 666 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Oct 28 17:56:25 old-k8s-version-743648 kubelet[666]: E1028 17:56:25.158425 666 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-s8546,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Oct 28 17:56:25 old-k8s-version-743648 kubelet[666]: E1028 17:56:25.158461 666 pod_workers.go:191] Error syncing pod 72338896-039d-4572-89ae-d308c555fbbf ("metrics-server-9975d5f86-4qgm8_kube-system(72338896-039d-4572-89ae-d308c555fbbf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
==> kubernetes-dashboard [616276ec12d451a51fff6336da444384ed153fbabadc71092e8a7c724398c245] <==
2024/10/28 17:51:14 Starting overwatch
2024/10/28 17:51:14 Using namespace: kubernetes-dashboard
2024/10/28 17:51:14 Using in-cluster config to connect to apiserver
2024/10/28 17:51:14 Using secret token for csrf signing
2024/10/28 17:51:14 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/10/28 17:51:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/10/28 17:51:14 Successful initial request to the apiserver, version: v1.20.0
2024/10/28 17:51:14 Generating JWE encryption key
2024/10/28 17:51:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/10/28 17:51:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/10/28 17:51:15 Initializing JWE encryption key from synchronized object
2024/10/28 17:51:15 Creating in-cluster Sidecar client
2024/10/28 17:51:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 17:51:15 Serving insecurely on HTTP port: 9090
2024/10/28 17:51:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 17:52:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 17:52:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 17:53:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 17:53:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 17:54:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 17:54:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 17:55:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 17:55:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 17:56:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [1aef6c7a010641c3e8bf5bbb6a6f0f188a15cc0faf087e71c2fdba8fe931c220] <==
I1028 17:51:36.360428 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1028 17:51:36.392696 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1028 17:51:36.392857 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1028 17:51:53.872083 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1028 17:51:53.872444 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-743648_125f5f55-0654-4195-8e5a-fd25c19c4bc9!
I1028 17:51:53.873958 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5d49336d-1a77-47b8-a8ce-ce674c284004", APIVersion:"v1", ResourceVersion:"852", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-743648_125f5f55-0654-4195-8e5a-fd25c19c4bc9 became leader
I1028 17:51:53.972796 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-743648_125f5f55-0654-4195-8e5a-fd25c19c4bc9!
==> storage-provisioner [7dafca81669967e40246c9d3a7d8737952b84284dae042835a4a590f12a77f78] <==
I1028 17:50:52.559464 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1028 17:51:22.561690 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-743648 -n old-k8s-version-743648
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-743648 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-4qgm8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-743648 describe pod metrics-server-9975d5f86-4qgm8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-743648 describe pod metrics-server-9975d5f86-4qgm8: exit status 1 (130.707545ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-4qgm8" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-743648 describe pod metrics-server-9975d5f86-4qgm8: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.31s)