=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-140749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-140749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m9.27692981s)
-- stdout --
* [old-k8s-version-140749] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20242
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
* Using the docker driver based on existing profile
* Starting "old-k8s-version-140749" primary control-plane node in "old-k8s-version-140749" cluster
* Pulling base image v0.0.46 ...
* Restarting existing docker container for "old-k8s-version-140749" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
* Verifying Kubernetes components...
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-140749 addons enable metrics-server
* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
-- /stdout --
** stderr **
I0120 14:26:56.833498 950903 out.go:345] Setting OutFile to fd 1 ...
I0120 14:26:56.833721 950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:26:56.833734 950903 out.go:358] Setting ErrFile to fd 2...
I0120 14:26:56.833740 950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:26:56.833986 950903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
I0120 14:26:56.834361 950903 out.go:352] Setting JSON to false
I0120 14:26:56.835390 950903 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14962,"bootTime":1737368255,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0120 14:26:56.835465 950903 start.go:139] virtualization:
I0120 14:26:56.840767 950903 out.go:177] * [old-k8s-version-140749] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0120 14:26:56.844020 950903 out.go:177] - MINIKUBE_LOCATION=20242
I0120 14:26:56.844069 950903 notify.go:220] Checking for updates...
I0120 14:26:56.850532 950903 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 14:26:56.853411 950903 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
I0120 14:26:56.856208 950903 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
I0120 14:26:56.859050 950903 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0120 14:26:56.861896 950903 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 14:26:56.865346 950903 config.go:182] Loaded profile config "old-k8s-version-140749": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0120 14:26:56.868998 950903 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
I0120 14:26:56.871948 950903 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 14:26:56.916245 950903 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
I0120 14:26:56.916380 950903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 14:26:57.002870 950903 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 14:26:56.990165693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 14:26:57.002985 950903 docker.go:318] overlay module found
I0120 14:26:57.006925 950903 out.go:177] * Using the docker driver based on existing profile
I0120 14:26:57.009867 950903 start.go:297] selected driver: docker
I0120 14:26:57.009898 950903 start.go:901] validating driver "docker" against &{Name:old-k8s-version-140749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:26:57.010024 950903 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 14:26:57.010767 950903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 14:26:57.089550 950903 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 14:26:57.078888764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 14:26:57.090055 950903 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 14:26:57.090088 950903 cni.go:84] Creating CNI manager for ""
I0120 14:26:57.090291 950903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 14:26:57.090365 950903 start.go:340] cluster config:
{Name:old-k8s-version-140749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:26:57.095738 950903 out.go:177] * Starting "old-k8s-version-140749" primary control-plane node in "old-k8s-version-140749" cluster
I0120 14:26:57.098754 950903 cache.go:121] Beginning downloading kic base image for docker with containerd
I0120 14:26:57.101886 950903 out.go:177] * Pulling base image v0.0.46 ...
I0120 14:26:57.104832 950903 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 14:26:57.104905 950903 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0120 14:26:57.104917 950903 cache.go:56] Caching tarball of preloaded images
I0120 14:26:57.105028 950903 preload.go:172] Found /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0120 14:26:57.105044 950903 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0120 14:26:57.105170 950903 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/config.json ...
I0120 14:26:57.105443 950903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0120 14:26:57.139935 950903 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0120 14:26:57.139959 950903 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0120 14:26:57.139973 950903 cache.go:227] Successfully downloaded all kic artifacts
I0120 14:26:57.140005 950903 start.go:360] acquireMachinesLock for old-k8s-version-140749: {Name:mk3b1de2e93537f0dae30829ba65f2718277905f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:26:57.140069 950903 start.go:364] duration metric: took 37.9µs to acquireMachinesLock for "old-k8s-version-140749"
I0120 14:26:57.140093 950903 start.go:96] Skipping create...Using existing machine configuration
I0120 14:26:57.140099 950903 fix.go:54] fixHost starting:
I0120 14:26:57.140372 950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
I0120 14:26:57.161116 950903 fix.go:112] recreateIfNeeded on old-k8s-version-140749: state=Stopped err=<nil>
W0120 14:26:57.161150 950903 fix.go:138] unexpected machine state, will restart: <nil>
I0120 14:26:57.164163 950903 out.go:177] * Restarting existing docker container for "old-k8s-version-140749" ...
I0120 14:26:57.167030 950903 cli_runner.go:164] Run: docker start old-k8s-version-140749
I0120 14:26:57.527115 950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
I0120 14:26:57.563993 950903 kic.go:430] container "old-k8s-version-140749" state is running.
I0120 14:26:57.564902 950903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-140749
I0120 14:26:57.601328 950903 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/config.json ...
I0120 14:26:57.601557 950903 machine.go:93] provisionDockerMachine start ...
I0120 14:26:57.601678 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:26:57.631658 950903 main.go:141] libmachine: Using SSH client type: native
I0120 14:26:57.631924 950903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33829 <nil> <nil>}
I0120 14:26:57.631934 950903 main.go:141] libmachine: About to run SSH command:
hostname
I0120 14:26:57.632542 950903 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50272->127.0.0.1:33829: read: connection reset by peer
I0120 14:27:00.773080 950903 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-140749
I0120 14:27:00.773104 950903 ubuntu.go:169] provisioning hostname "old-k8s-version-140749"
I0120 14:27:00.773179 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:00.803054 950903 main.go:141] libmachine: Using SSH client type: native
I0120 14:27:00.803310 950903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33829 <nil> <nil>}
I0120 14:27:00.803324 950903 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-140749 && echo "old-k8s-version-140749" | sudo tee /etc/hostname
I0120 14:27:00.962725 950903 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-140749
I0120 14:27:00.962810 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:00.992437 950903 main.go:141] libmachine: Using SSH client type: native
I0120 14:27:00.992698 950903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33829 <nil> <nil>}
I0120 14:27:00.992723 950903 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-140749' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-140749/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-140749' | sudo tee -a /etc/hosts;
fi
fi
I0120 14:27:01.135785 950903 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 14:27:01.135821 950903 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20242-741865/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-741865/.minikube}
I0120 14:27:01.135872 950903 ubuntu.go:177] setting up certificates
I0120 14:27:01.135902 950903 provision.go:84] configureAuth start
I0120 14:27:01.136000 950903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-140749
I0120 14:27:01.166093 950903 provision.go:143] copyHostCerts
I0120 14:27:01.166158 950903 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem, removing ...
I0120 14:27:01.166167 950903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem
I0120 14:27:01.166249 950903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem (1078 bytes)
I0120 14:27:01.166370 950903 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem, removing ...
I0120 14:27:01.166376 950903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem
I0120 14:27:01.166403 950903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem (1123 bytes)
I0120 14:27:01.166468 950903 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem, removing ...
I0120 14:27:01.166473 950903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem
I0120 14:27:01.166497 950903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem (1679 bytes)
I0120 14:27:01.166560 950903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-140749 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-140749]
I0120 14:27:01.498196 950903 provision.go:177] copyRemoteCerts
I0120 14:27:01.498281 950903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 14:27:01.498334 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:01.516805 950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
I0120 14:27:01.607677 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0120 14:27:01.635820 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0120 14:27:01.661396 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0120 14:27:01.686435 950903 provision.go:87] duration metric: took 550.512166ms to configureAuth
I0120 14:27:01.686511 950903 ubuntu.go:193] setting minikube options for container-runtime
I0120 14:27:01.686749 950903 config.go:182] Loaded profile config "old-k8s-version-140749": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0120 14:27:01.686767 950903 machine.go:96] duration metric: took 4.08520262s to provisionDockerMachine
I0120 14:27:01.686778 950903 start.go:293] postStartSetup for "old-k8s-version-140749" (driver="docker")
I0120 14:27:01.686803 950903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 14:27:01.686865 950903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 14:27:01.686923 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:01.705240 950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
I0120 14:27:01.795317 950903 ssh_runner.go:195] Run: cat /etc/os-release
I0120 14:27:01.799153 950903 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0120 14:27:01.799212 950903 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0120 14:27:01.799225 950903 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0120 14:27:01.799234 950903 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0120 14:27:01.799245 950903 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-741865/.minikube/addons for local assets ...
I0120 14:27:01.799322 950903 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-741865/.minikube/files for local assets ...
I0120 14:27:01.799418 950903 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem -> 7472562.pem in /etc/ssl/certs
I0120 14:27:01.799544 950903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 14:27:01.808961 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem --> /etc/ssl/certs/7472562.pem (1708 bytes)
I0120 14:27:01.834385 950903 start.go:296] duration metric: took 147.590756ms for postStartSetup
I0120 14:27:01.834470 950903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0120 14:27:01.834519 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:01.852203 950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
I0120 14:27:01.940455 950903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0120 14:27:01.947983 950903 fix.go:56] duration metric: took 4.807876879s for fixHost
I0120 14:27:01.948062 950903 start.go:83] releasing machines lock for "old-k8s-version-140749", held for 4.807978458s
I0120 14:27:01.948172 950903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-140749
I0120 14:27:01.981284 950903 ssh_runner.go:195] Run: cat /version.json
I0120 14:27:01.981342 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:01.981646 950903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 14:27:01.981720 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:02.015999 950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
I0120 14:27:02.019799 950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
I0120 14:27:02.109329 950903 ssh_runner.go:195] Run: systemctl --version
I0120 14:27:02.262571 950903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0120 14:27:02.268658 950903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0120 14:27:02.309921 950903 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0120 14:27:02.310000 950903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 14:27:02.322355 950903 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0120 14:27:02.322379 950903 start.go:495] detecting cgroup driver to use...
I0120 14:27:02.322412 950903 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0120 14:27:02.322468 950903 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 14:27:02.356989 950903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 14:27:02.372020 950903 docker.go:217] disabling cri-docker service (if available) ...
I0120 14:27:02.372084 950903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 14:27:02.390728 950903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 14:27:02.404593 950903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 14:27:02.509473 950903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 14:27:02.631861 950903 docker.go:233] disabling docker service ...
I0120 14:27:02.631932 950903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 14:27:02.648831 950903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 14:27:02.664093 950903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 14:27:02.797207 950903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 14:27:02.917442 950903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 14:27:02.938070 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 14:27:02.956438 950903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0120 14:27:02.967629 950903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 14:27:02.978634 950903 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 14:27:02.978703 950903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 14:27:02.990131 950903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 14:27:03.003766 950903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 14:27:03.020025 950903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 14:27:03.032399 950903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 14:27:03.043485 950903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 14:27:03.060540 950903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 14:27:03.072981 950903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 14:27:03.083153 950903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:27:03.201649 950903 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 14:27:03.464533 950903 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0120 14:27:03.464613 950903 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 14:27:03.472726 950903 start.go:563] Will wait 60s for crictl version
I0120 14:27:03.472798 950903 ssh_runner.go:195] Run: which crictl
I0120 14:27:03.476778 950903 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 14:27:03.549634 950903 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.24
RuntimeApiVersion: v1
I0120 14:27:03.549724 950903 ssh_runner.go:195] Run: containerd --version
I0120 14:27:03.579985 950903 ssh_runner.go:195] Run: containerd --version
I0120 14:27:03.614734 950903 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
I0120 14:27:03.618334 950903 cli_runner.go:164] Run: docker network inspect old-k8s-version-140749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 14:27:03.637347 950903 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0120 14:27:03.641242 950903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 14:27:03.660806 950903 kubeadm.go:883] updating cluster {Name:old-k8s-version-140749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 14:27:03.660939 950903 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 14:27:03.661016 950903 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 14:27:03.740304 950903 containerd.go:627] all images are preloaded for containerd runtime.
I0120 14:27:03.740326 950903 containerd.go:534] Images already preloaded, skipping extraction
I0120 14:27:03.740386 950903 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 14:27:03.814311 950903 containerd.go:627] all images are preloaded for containerd runtime.
I0120 14:27:03.814335 950903 cache_images.go:84] Images are preloaded, skipping loading
I0120 14:27:03.814350 950903 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
I0120 14:27:03.814512 950903 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-140749 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0120 14:27:03.814586 950903 ssh_runner.go:195] Run: sudo crictl info
I0120 14:27:03.892671 950903 cni.go:84] Creating CNI manager for ""
I0120 14:27:03.892695 950903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 14:27:03.892704 950903 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 14:27:03.892726 950903 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-140749 NodeName:old-k8s-version-140749 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0120 14:27:03.892847 950903 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-140749"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0120 14:27:03.892912 950903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0120 14:27:03.911458 950903 binaries.go:44] Found k8s binaries, skipping transfer
I0120 14:27:03.911526 950903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 14:27:03.928903 950903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0120 14:27:03.965397 950903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 14:27:04.023109 950903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0120 14:27:04.050986 950903 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0120 14:27:04.054834 950903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 14:27:04.083916 950903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:27:04.239100 950903 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 14:27:04.275100 950903 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749 for IP: 192.168.85.2
I0120 14:27:04.275119 950903 certs.go:194] generating shared ca certs ...
I0120 14:27:04.275142 950903 certs.go:226] acquiring lock for ca certs: {Name:mka7a6ccd7d8b5f47789c70c8e6dc479acdcdb1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:27:04.275335 950903 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.key
I0120 14:27:04.275596 950903 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.key
I0120 14:27:04.275610 950903 certs.go:256] generating profile certs ...
I0120 14:27:04.275718 950903 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.key
I0120 14:27:04.275792 950903 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/apiserver.key.f3a616b9
I0120 14:27:04.276033 950903 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/proxy-client.key
I0120 14:27:04.276331 950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256.pem (1338 bytes)
W0120 14:27:04.276431 950903 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256_empty.pem, impossibly tiny 0 bytes
I0120 14:27:04.276460 950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem (1679 bytes)
I0120 14:27:04.276538 950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem (1078 bytes)
I0120 14:27:04.276583 950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem (1123 bytes)
I0120 14:27:04.276610 950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem (1679 bytes)
I0120 14:27:04.276665 950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem (1708 bytes)
I0120 14:27:04.277436 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 14:27:04.340286 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0120 14:27:04.379786 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 14:27:04.414376 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0120 14:27:04.450238 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0120 14:27:04.489721 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0120 14:27:04.517446 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 14:27:04.542907 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0120 14:27:04.568353 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 14:27:04.635339 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256.pem --> /usr/share/ca-certificates/747256.pem (1338 bytes)
I0120 14:27:04.706670 950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem --> /usr/share/ca-certificates/7472562.pem (1708 bytes)
I0120 14:27:04.755765 950903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 14:27:04.797728 950903 ssh_runner.go:195] Run: openssl version
I0120 14:27:04.806204 950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 14:27:04.819280 950903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 14:27:04.823993 950903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 13:39 /usr/share/ca-certificates/minikubeCA.pem
I0120 14:27:04.824269 950903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 14:27:04.840819 950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 14:27:04.860113 950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/747256.pem && ln -fs /usr/share/ca-certificates/747256.pem /etc/ssl/certs/747256.pem"
I0120 14:27:04.874755 950903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/747256.pem
I0120 14:27:04.878988 950903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 13:48 /usr/share/ca-certificates/747256.pem
I0120 14:27:04.879061 950903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/747256.pem
I0120 14:27:04.898042 950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/747256.pem /etc/ssl/certs/51391683.0"
I0120 14:27:04.939761 950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7472562.pem && ln -fs /usr/share/ca-certificates/7472562.pem /etc/ssl/certs/7472562.pem"
I0120 14:27:04.974905 950903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7472562.pem
I0120 14:27:04.989083 950903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 13:48 /usr/share/ca-certificates/7472562.pem
I0120 14:27:04.989166 950903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7472562.pem
I0120 14:27:04.996595 950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7472562.pem /etc/ssl/certs/3ec20f2e.0"
I0120 14:27:05.013307 950903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 14:27:05.023541 950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0120 14:27:05.037781 950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0120 14:27:05.053029 950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0120 14:27:05.072639 950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0120 14:27:05.080322 950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0120 14:27:05.090348 950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0120 14:27:05.105081 950903 kubeadm.go:392] StartCluster: {Name:old-k8s-version-140749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:27:05.105179 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0120 14:27:05.105275 950903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 14:27:05.177517 950903 cri.go:89] found id: "49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
I0120 14:27:05.177549 950903 cri.go:89] found id: "4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
I0120 14:27:05.177556 950903 cri.go:89] found id: "f927d850c11b6c45d5cf960f5cc2e994752352515a4ba8707751f12c497ceaad"
I0120 14:27:05.177559 950903 cri.go:89] found id: "4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
I0120 14:27:05.177562 950903 cri.go:89] found id: "4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
I0120 14:27:05.177566 950903 cri.go:89] found id: "a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
I0120 14:27:05.177569 950903 cri.go:89] found id: "f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
I0120 14:27:05.177572 950903 cri.go:89] found id: "032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
I0120 14:27:05.177576 950903 cri.go:89] found id: ""
I0120 14:27:05.177659 950903 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0120 14:27:05.197340 950903 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-20T14:27:05Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0120 14:27:05.197421 950903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 14:27:05.207511 950903 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0120 14:27:05.207532 950903 kubeadm.go:593] restartPrimaryControlPlane start ...
I0120 14:27:05.207586 950903 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0120 14:27:05.223964 950903 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0120 14:27:05.224406 950903 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-140749" does not appear in /home/jenkins/minikube-integration/20242-741865/kubeconfig
I0120 14:27:05.224517 950903 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-741865/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-140749" cluster setting kubeconfig missing "old-k8s-version-140749" context setting]
I0120 14:27:05.224810 950903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-741865/kubeconfig: {Name:mkcf7578b1c91d60616ac7150d8566b28a92e8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:27:05.226106 950903 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0120 14:27:05.237384 950903 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I0120 14:27:05.237426 950903 kubeadm.go:597] duration metric: took 29.88751ms to restartPrimaryControlPlane
I0120 14:27:05.237440 950903 kubeadm.go:394] duration metric: took 132.369088ms to StartCluster
I0120 14:27:05.237455 950903 settings.go:142] acquiring lock: {Name:mkf7c5865cae55b4373a466e1a24783d8090ef1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:27:05.237527 950903 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20242-741865/kubeconfig
I0120 14:27:05.238332 950903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-741865/kubeconfig: {Name:mkcf7578b1c91d60616ac7150d8566b28a92e8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:27:05.238571 950903 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 14:27:05.238985 950903 config.go:182] Loaded profile config "old-k8s-version-140749": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0120 14:27:05.239060 950903 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 14:27:05.239186 950903 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-140749"
I0120 14:27:05.239205 950903 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-140749"
W0120 14:27:05.239212 950903 addons.go:247] addon storage-provisioner should already be in state true
I0120 14:27:05.239226 950903 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-140749"
I0120 14:27:05.239238 950903 host.go:66] Checking if "old-k8s-version-140749" exists ...
I0120 14:27:05.239245 950903 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-140749"
I0120 14:27:05.239633 950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
I0120 14:27:05.239706 950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
I0120 14:27:05.240072 950903 addons.go:69] Setting dashboard=true in profile "old-k8s-version-140749"
I0120 14:27:05.240098 950903 addons.go:238] Setting addon dashboard=true in "old-k8s-version-140749"
W0120 14:27:05.240112 950903 addons.go:247] addon dashboard should already be in state true
I0120 14:27:05.240144 950903 host.go:66] Checking if "old-k8s-version-140749" exists ...
I0120 14:27:05.240645 950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
I0120 14:27:05.243586 950903 out.go:177] * Verifying Kubernetes components...
I0120 14:27:05.243875 950903 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-140749"
I0120 14:27:05.243900 950903 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-140749"
W0120 14:27:05.243909 950903 addons.go:247] addon metrics-server should already be in state true
I0120 14:27:05.243943 950903 host.go:66] Checking if "old-k8s-version-140749" exists ...
I0120 14:27:05.244474 950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
I0120 14:27:05.247012 950903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:27:05.356825 950903 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0120 14:27:05.361335 950903 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 14:27:05.365757 950903 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:27:05.365783 950903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 14:27:05.365861 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:05.367141 950903 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0120 14:27:05.371458 950903 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-140749"
W0120 14:27:05.371506 950903 addons.go:247] addon default-storageclass should already be in state true
I0120 14:27:05.371535 950903 host.go:66] Checking if "old-k8s-version-140749" exists ...
I0120 14:27:05.372097 950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
I0120 14:27:05.372392 950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0120 14:27:05.372419 950903 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0120 14:27:05.372491 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:05.429056 950903 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0120 14:27:05.432904 950903 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0120 14:27:05.432965 950903 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0120 14:27:05.433049 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:05.501417 950903 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 14:27:05.506176 950903 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 14:27:05.506207 950903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 14:27:05.506291 950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
I0120 14:27:05.508329 950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
I0120 14:27:05.514397 950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
I0120 14:27:05.564362 950903 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-140749" to be "Ready" ...
I0120 14:27:05.605809 950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
I0120 14:27:05.607714 950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
I0120 14:27:05.767403 950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0120 14:27:05.767482 950903 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0120 14:27:05.796135 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:27:05.826472 950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0120 14:27:05.826563 950903 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0120 14:27:05.877079 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 14:27:05.902753 950903 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0120 14:27:05.902829 950903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0120 14:27:05.957816 950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0120 14:27:05.957896 950903 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0120 14:27:06.046646 950903 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0120 14:27:06.046746 950903 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0120 14:27:06.215103 950903 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:27:06.215125 950903 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0120 14:27:06.232141 950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0120 14:27:06.232161 950903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0120 14:27:06.386102 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 14:27:06.390615 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:06.390645 950903 retry.go:31] will retry after 196.091052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:06.414282 950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0120 14:27:06.414359 950903 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W0120 14:27:06.492837 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:06.492919 950903 retry.go:31] will retry after 153.669363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:06.510021 950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0120 14:27:06.510099 950903 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0120 14:27:06.587276 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:27:06.646883 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 14:27:06.656676 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:06.656759 950903 retry.go:31] will retry after 329.873089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:06.714025 950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0120 14:27:06.714106 950903 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0120 14:27:06.898316 950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0120 14:27:06.898341 950903 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
W0120 14:27:06.971859 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:06.971890 950903 retry.go:31] will retry after 332.523585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:06.989425 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:27:07.079692 950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:27:07.079717 950903 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
W0120 14:27:07.257035 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:07.257064 950903 retry.go:31] will retry after 389.315008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 14:27:07.257114 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:07.257121 950903 retry.go:31] will retry after 311.201685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:07.259990 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:27:07.305393 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 14:27:07.499873 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:07.499917 950903 retry.go:31] will retry after 340.602335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 14:27:07.544876 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:07.544907 950903 retry.go:31] will retry after 761.060402ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:07.565496 950903 node_ready.go:53] error getting node "old-k8s-version-140749": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140749": dial tcp 192.168.85.2:8443: connect: connection refused
I0120 14:27:07.568825 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:27:07.647360 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 14:27:07.710745 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:07.710774 950903 retry.go:31] will retry after 561.574304ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 14:27:07.798469 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:07.798499 950903 retry.go:31] will retry after 374.389711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:07.841755 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 14:27:07.960225 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:07.960257 950903 retry.go:31] will retry after 216.314433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:08.173268 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0120 14:27:08.177769 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:27:08.273899 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:27:08.306384 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 14:27:08.350735 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:08.350774 950903 retry.go:31] will retry after 1.051790544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 14:27:08.350832 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:08.350846 950903 retry.go:31] will retry after 367.123054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 14:27:08.478156 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:08.478185 950903 retry.go:31] will retry after 451.185223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 14:27:08.478221 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:08.478227 950903 retry.go:31] will retry after 1.020972988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:08.718187 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 14:27:08.833054 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:08.833094 950903 retry.go:31] will retry after 1.060513552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:08.930483 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 14:27:09.011091 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:09.011125 950903 retry.go:31] will retry after 1.634293388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:09.403187 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0120 14:27:09.499612 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 14:27:09.504530 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:09.504570 950903 retry.go:31] will retry after 920.703674ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 14:27:09.637074 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:09.637110 950903 retry.go:31] will retry after 1.839561779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:09.894502 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 14:27:09.988456 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:09.988486 950903 retry.go:31] will retry after 1.288416794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:10.065096 950903 node_ready.go:53] error getting node "old-k8s-version-140749": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140749": dial tcp 192.168.85.2:8443: connect: connection refused
I0120 14:27:10.425699 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 14:27:10.512242 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:10.512273 950903 retry.go:31] will retry after 1.183746708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:10.646163 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 14:27:10.775368 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:10.775399 950903 retry.go:31] will retry after 1.589620431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:11.277572 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 14:27:11.385863 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:11.385896 950903 retry.go:31] will retry after 1.201291143s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:11.477082 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 14:27:11.641364 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:11.641396 950903 retry.go:31] will retry after 2.295164699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:11.696649 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 14:27:11.829497 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:11.829527 950903 retry.go:31] will retry after 1.693668479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:12.065330 950903 node_ready.go:53] error getting node "old-k8s-version-140749": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140749": dial tcp 192.168.85.2:8443: connect: connection refused
I0120 14:27:12.366227 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 14:27:12.521560 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:12.521607 950903 retry.go:31] will retry after 3.065694042s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:12.587501 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 14:27:12.719630 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:12.719662 950903 retry.go:31] will retry after 2.315984036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:13.523637 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 14:27:13.815173 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:13.815203 950903 retry.go:31] will retry after 4.036304951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:13.937382 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:27:14.065492 950903 node_ready.go:53] error getting node "old-k8s-version-140749": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140749": dial tcp 192.168.85.2:8443: connect: connection refused
W0120 14:27:14.268764 950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:14.268793 950903 retry.go:31] will retry after 2.427911411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 14:27:15.036035 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:27:15.587504 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:27:16.697755 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:27:17.851680 950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0120 14:27:23.705233 950903 node_ready.go:49] node "old-k8s-version-140749" has status "Ready":"True"
I0120 14:27:23.705256 950903 node_ready.go:38] duration metric: took 18.140817176s for node "old-k8s-version-140749" to be "Ready" ...
I0120 14:27:23.705266 950903 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 14:27:24.090962 950903 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-qsqbp" in "kube-system" namespace to be "Ready" ...
I0120 14:27:24.174534 950903 pod_ready.go:93] pod "coredns-74ff55c5b-qsqbp" in "kube-system" namespace has status "Ready":"True"
I0120 14:27:24.174610 950903 pod_ready.go:82] duration metric: took 83.553392ms for pod "coredns-74ff55c5b-qsqbp" in "kube-system" namespace to be "Ready" ...
I0120 14:27:24.174637 950903 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
I0120 14:27:24.409236 950903 pod_ready.go:93] pod "etcd-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"True"
I0120 14:27:24.409321 950903 pod_ready.go:82] duration metric: took 234.656558ms for pod "etcd-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
I0120 14:27:24.409352 950903 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
I0120 14:27:26.208292 950903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.620750711s)
I0120 14:27:26.208331 950903 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-140749"
I0120 14:27:26.208392 950903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.510611845s)
I0120 14:27:26.208417 950903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.356715266s)
I0120 14:27:26.208671 950903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.172522419s)
I0120 14:27:26.212035 950903 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-140749 addons enable metrics-server
I0120 14:27:26.290132 950903 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
I0120 14:27:26.293685 950903 addons.go:514] duration metric: took 21.054624617s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
I0120 14:27:26.444133 950903 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:28.917700 950903 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:31.416051 950903 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:32.917715 950903 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"True"
I0120 14:27:32.917746 950903 pod_ready.go:82] duration metric: took 8.508356338s for pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
I0120 14:27:32.917758 950903 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
I0120 14:27:34.928348 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:37.426345 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:39.441421 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:41.924301 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:43.931714 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:46.424893 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:48.427826 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:50.431191 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:52.928759 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:54.928797 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:57.424755 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:27:59.924589 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:01.924859 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:03.924974 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:05.925579 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:08.425051 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:10.924139 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:12.926705 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:15.425527 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:17.924641 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:19.927625 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:22.430449 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:24.923697 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:26.924224 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:28.925595 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:30.928458 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:33.424309 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:35.424578 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:37.425229 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:39.924305 950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:41.433729 950903 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"True"
I0120 14:28:41.433757 950903 pod_ready.go:82] duration metric: took 1m8.515990789s for pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
I0120 14:28:41.433770 950903 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wrpl6" in "kube-system" namespace to be "Ready" ...
I0120 14:28:41.441339 950903 pod_ready.go:93] pod "kube-proxy-wrpl6" in "kube-system" namespace has status "Ready":"True"
I0120 14:28:41.441367 950903 pod_ready.go:82] duration metric: took 7.589685ms for pod "kube-proxy-wrpl6" in "kube-system" namespace to be "Ready" ...
I0120 14:28:41.441387 950903 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
I0120 14:28:42.449722 950903 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"True"
I0120 14:28:42.449748 950903 pod_ready.go:82] duration metric: took 1.008350501s for pod "kube-scheduler-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
I0120 14:28:42.449760 950903 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace to be "Ready" ...
I0120 14:28:44.455878 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:46.456063 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:48.456951 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:50.982318 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:53.457118 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:55.457832 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:57.957293 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:00.457254 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:02.957338 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:05.457281 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:07.956788 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:10.455727 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:12.455805 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:14.455919 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:16.956594 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:18.957055 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:20.957177 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:22.977897 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:25.456100 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:27.956285 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:29.957097 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:31.958545 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:34.520016 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:36.958082 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:39.455827 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:41.456412 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:43.465069 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:45.956385 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:48.456073 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:50.957169 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:53.456920 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:55.460163 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:57.956176 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:59.957071 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:01.967055 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:04.456138 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:06.956305 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:08.956902 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:11.455925 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:13.956200 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:15.956651 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:17.956978 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:19.957565 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:22.456276 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:24.970006 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:27.456774 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:29.463141 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:31.957178 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:34.455767 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:36.956611 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:39.456640 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:41.956918 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:43.973328 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:46.455494 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:48.455765 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:50.456716 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:52.956367 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:54.956544 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:57.457408 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:30:59.955937 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:01.957235 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:03.958142 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:06.461136 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:08.956483 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:10.956661 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:13.456294 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:15.456562 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:17.955801 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:19.956567 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:21.956906 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:23.956980 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:26.458636 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:28.957544 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:31.456217 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:33.956541 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:36.456146 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:38.456341 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:40.456681 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:42.955903 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:44.956479 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:47.456415 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:49.956064 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:51.956572 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:54.456227 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:56.456785 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:31:58.956968 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:00.957085 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:02.957264 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:04.962625 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:07.455559 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:09.455774 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:11.456500 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:13.956820 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:16.025898 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:18.457623 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:20.957089 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:23.456405 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:25.955753 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:28.456663 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:30.463692 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:32.956881 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:34.956937 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:36.960987 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:39.456248 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:41.456476 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:42.456347 950903 pod_ready.go:82] duration metric: took 4m0.0065748s for pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace to be "Ready" ...
E0120 14:32:42.456373 950903 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0120 14:32:42.456384 950903 pod_ready.go:39] duration metric: took 5m18.75110665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
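(The pod_ready.go lines above record a simple poll loop: fetch the pod on an interval and check its PodReady condition until it reports True or the 6m0s budget expires, which is exactly the "context deadline exceeded" hit at 14:32:42. A minimal sketch of that loop, assuming client-go; the function and variable names are illustrative, not minikube's actual helpers.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its PodReady condition is True
// or the timeout elapses, like the pod_ready.go:79/93/103 lines above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system",
		"metrics-server-9975d5f86-lfq2q", 6*time.Minute); err != nil {
		// A pod that never becomes Ready surfaces here as a context
		// deadline error, matching the E-line at 14:32:42 above.
		fmt.Println("waitPodCondition:", err)
	}
}
```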
I0120 14:32:42.456400 950903 api_server.go:52] waiting for apiserver process to appear ...
I0120 14:32:42.456430 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:32:42.456494 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:32:42.495561 950903 cri.go:89] found id: "7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
I0120 14:32:42.495581 950903 cri.go:89] found id: "032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
I0120 14:32:42.495586 950903 cri.go:89] found id: ""
I0120 14:32:42.495593 950903 logs.go:282] 2 containers: [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63]
I0120 14:32:42.495650 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.499420 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.502920 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:32:42.503009 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:32:42.542022 950903 cri.go:89] found id: "260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
I0120 14:32:42.542087 950903 cri.go:89] found id: "4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
I0120 14:32:42.542106 950903 cri.go:89] found id: ""
I0120 14:32:42.542131 950903 logs.go:282] 2 containers: [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b]
I0120 14:32:42.542221 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.546159 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.549559 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:32:42.549707 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:32:42.588844 950903 cri.go:89] found id: "df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
I0120 14:32:42.588910 950903 cri.go:89] found id: "49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
I0120 14:32:42.588931 950903 cri.go:89] found id: ""
I0120 14:32:42.588965 950903 logs.go:282] 2 containers: [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d]
I0120 14:32:42.589060 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.593064 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.596734 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:32:42.596827 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:32:42.637742 950903 cri.go:89] found id: "901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
I0120 14:32:42.637766 950903 cri.go:89] found id: "a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
I0120 14:32:42.637772 950903 cri.go:89] found id: ""
I0120 14:32:42.637779 950903 logs.go:282] 2 containers: [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071]
I0120 14:32:42.637837 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.641531 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.645214 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:32:42.645294 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:32:42.694848 950903 cri.go:89] found id: "980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
I0120 14:32:42.694873 950903 cri.go:89] found id: "4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
I0120 14:32:42.694878 950903 cri.go:89] found id: ""
I0120 14:32:42.694885 950903 logs.go:282] 2 containers: [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25]
I0120 14:32:42.694944 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.698884 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.702523 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:32:42.702604 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:32:42.744000 950903 cri.go:89] found id: "cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
I0120 14:32:42.744031 950903 cri.go:89] found id: "f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
I0120 14:32:42.744037 950903 cri.go:89] found id: ""
I0120 14:32:42.744045 950903 logs.go:282] 2 containers: [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec]
I0120 14:32:42.744145 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.748068 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.751593 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:32:42.751671 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:32:42.788738 950903 cri.go:89] found id: "15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
I0120 14:32:42.788761 950903 cri.go:89] found id: "4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
I0120 14:32:42.788766 950903 cri.go:89] found id: ""
I0120 14:32:42.788773 950903 logs.go:282] 2 containers: [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f]
I0120 14:32:42.788833 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.792694 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.796248 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:32:42.796327 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:32:42.835380 950903 cri.go:89] found id: "c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
I0120 14:32:42.835402 950903 cri.go:89] found id: ""
I0120 14:32:42.835411 950903 logs.go:282] 1 containers: [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6]
I0120 14:32:42.835470 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.839424 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:32:42.839588 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:32:42.886867 950903 cri.go:89] found id: "0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
I0120 14:32:42.886943 950903 cri.go:89] found id: "46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
I0120 14:32:42.886963 950903 cri.go:89] found id: ""
I0120 14:32:42.886990 950903 logs.go:282] 2 containers: [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa]
I0120 14:32:42.887084 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.892761 950903 ssh_runner.go:195] Run: which crictl
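(The cri.go/ssh_runner pairs above enumerate the containers for each control-plane component with `sudo crictl ps -a --quiet --name=<component>`, where `--quiet` prints one container ID per line. A hedged sketch of that step; running crictl locally rather than over SSH is an assumption made to keep the example self-contained.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns all container IDs (running or exited, via -a)
// whose name matches the given component, one ID per output line.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// Two IDs per component above means the restarted cluster kept
		// the pre-restart (exited) container alongside the new one.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```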
I0120 14:32:42.897255 950903 logs.go:123] Gathering logs for dmesg ...
I0120 14:32:42.897281 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:32:42.915606 950903 logs.go:123] Gathering logs for describe nodes ...
I0120 14:32:42.915635 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:32:43.086993 950903 logs.go:123] Gathering logs for etcd [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b] ...
I0120 14:32:43.087027 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
I0120 14:32:43.137045 950903 logs.go:123] Gathering logs for coredns [49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d] ...
I0120 14:32:43.137078 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
I0120 14:32:43.177316 950903 logs.go:123] Gathering logs for kindnet [4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f] ...
I0120 14:32:43.177346 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
I0120 14:32:43.226521 950903 logs.go:123] Gathering logs for kubernetes-dashboard [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6] ...
I0120 14:32:43.226552 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
I0120 14:32:43.277166 950903 logs.go:123] Gathering logs for containerd ...
I0120 14:32:43.277198 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:32:43.350057 950903 logs.go:123] Gathering logs for kubelet ...
I0120 14:32:43.350162 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0120 14:32:43.415129 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.690899 663 reflector.go:138] object-"kube-system"/"coredns-token-f95sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f95sh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.415423 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691117 663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.415671 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691376 663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.415917 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691453 663 reflector.go:138] object-"default"/"default-token-8wp7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8wp7x" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.416155 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691503 663 reflector.go:138] object-"kube-system"/"kindnet-token-xx7dh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xx7dh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.416381 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691562 663 reflector.go:138] object-"kube-system"/"kube-proxy-token-s6tbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s6tbt" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.416607 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691635 663 reflector.go:138] object-"kube-system"/"metrics-server-token-dgscp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dgscp" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.416848 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.692028 663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-mlrbf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-mlrbf" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.425962 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:27 old-k8s-version-140749 kubelet[663]: E0120 14:27:27.904251 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:43.426161 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:28 old-k8s-version-140749 kubelet[663]: E0120 14:27:28.466147 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.428994 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:41 old-k8s-version-140749 kubelet[663]: E0120 14:27:41.953783 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:43.430807 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:43 old-k8s-version-140749 kubelet[663]: E0120 14:27:43.761273 663 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-xh79t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-xh79t" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.431350 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:55 old-k8s-version-140749 kubelet[663]: E0120 14:27:55.965627 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.432061 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:57 old-k8s-version-140749 kubelet[663]: E0120 14:27:57.605396 663 pod_workers.go:191] Error syncing pod e9c231b5-a5c1-498d-aa26-caf987208dc2 ("storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"
W0120 14:32:43.432532 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:58 old-k8s-version-140749 kubelet[663]: E0120 14:27:58.615334 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.432875 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:59 old-k8s-version-140749 kubelet[663]: E0120 14:27:59.633074 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.433786 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:05 old-k8s-version-140749 kubelet[663]: E0120 14:28:05.586265 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.436345 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:06 old-k8s-version-140749 kubelet[663]: E0120 14:28:06.962070 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:43.436803 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:17 old-k8s-version-140749 kubelet[663]: E0120 14:28:17.944316 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.437380 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:18 old-k8s-version-140749 kubelet[663]: E0120 14:28:18.685866 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.437740 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:25 old-k8s-version-140749 kubelet[663]: E0120 14:28:25.586275 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.437928 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:32 old-k8s-version-140749 kubelet[663]: E0120 14:28:32.945271 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.438346 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:37 old-k8s-version-140749 kubelet[663]: E0120 14:28:37.943400 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.438539 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:46 old-k8s-version-140749 kubelet[663]: E0120 14:28:46.943760 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.439145 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:51 old-k8s-version-140749 kubelet[663]: E0120 14:28:51.768837 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.439485 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:55 old-k8s-version-140749 kubelet[663]: E0120 14:28:55.585724 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.442279 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:00 old-k8s-version-140749 kubelet[663]: E0120 14:29:00.952397 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:43.442626 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:10 old-k8s-version-140749 kubelet[663]: E0120 14:29:10.942909 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.442824 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:11 old-k8s-version-140749 kubelet[663]: E0120 14:29:11.944209 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.443346 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:22 old-k8s-version-140749 kubelet[663]: E0120 14:29:22.951255 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.443537 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:23 old-k8s-version-140749 kubelet[663]: E0120 14:29:23.956425 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.444140 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:35 old-k8s-version-140749 kubelet[663]: E0120 14:29:35.903419 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.444330 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:36 old-k8s-version-140749 kubelet[663]: E0120 14:29:36.945915 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.444679 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:45 old-k8s-version-140749 kubelet[663]: E0120 14:29:45.585844 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.444873 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:51 old-k8s-version-140749 kubelet[663]: E0120 14:29:51.943550 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.445206 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:59 old-k8s-version-140749 kubelet[663]: E0120 14:29:59.943021 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.445390 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:05 old-k8s-version-140749 kubelet[663]: E0120 14:30:05.943119 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.445746 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:13 old-k8s-version-140749 kubelet[663]: E0120 14:30:13.942813 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.445986 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:17 old-k8s-version-140749 kubelet[663]: E0120 14:30:17.943166 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.446323 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.943282 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.451516 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.959102 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:43.451888 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:40 old-k8s-version-140749 kubelet[663]: E0120 14:30:40.946333 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.452090 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:43 old-k8s-version-140749 kubelet[663]: E0120 14:30:43.946388 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.452419 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:52 old-k8s-version-140749 kubelet[663]: E0120 14:30:52.943384 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.452606 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:57 old-k8s-version-140749 kubelet[663]: E0120 14:30:57.943462 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.453215 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:04 old-k8s-version-140749 kubelet[663]: E0120 14:31:04.184881 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.453555 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:05 old-k8s-version-140749 kubelet[663]: E0120 14:31:05.586278 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.453747 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:11 old-k8s-version-140749 kubelet[663]: E0120 14:31:11.943489 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.454085 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:19 old-k8s-version-140749 kubelet[663]: E0120 14:31:19.942873 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.454273 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:22 old-k8s-version-140749 kubelet[663]: E0120 14:31:22.943814 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.454460 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:33 old-k8s-version-140749 kubelet[663]: E0120 14:31:33.943249 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.454796 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: E0120 14:31:34.945214 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.454982 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:46 old-k8s-version-140749 kubelet[663]: E0120 14:31:46.943127 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.455332 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: E0120 14:31:48.943309 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.455568 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:59 old-k8s-version-140749 kubelet[663]: E0120 14:31:59.943359 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.455909 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: E0120 14:32:00.943110 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.456251 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.942810 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.456438 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.456624 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.456954 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.457154 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.457520 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
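The sweep above keeps flagging the same two pods: metrics-server-9975d5f86-lfq2q cannot start because its image fake.domain/registry.k8s.io/echoserver:1.4 is unpullable (fake.domain never resolves against 192.168.85.1:53), and dashboard-metrics-scraper-8d5bb5db8-glscn is crash-looping with a back-off that grows from 10s to 2m40s over the course of this log. A hedged manual check, reusing only paths and names that appear in this log (run from inside the node, e.g. via "minikube ssh -p old-k8s-version-140749"):

    # Describe the two failing pods named in the kubelet errors above
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system describe pod metrics-server-9975d5f86-lfq2q
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-glscn
    # Confirm the DNS failure the kubelet reports: fake.domain should not resolve
    nslookup fake.domain 192.168.85.1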
I0120 14:32:43.457532 950903 logs.go:123] Gathering logs for kube-proxy [4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25] ...
I0120 14:32:43.457547 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
I0120 14:32:43.513402 950903 logs.go:123] Gathering logs for kube-controller-manager [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45] ...
I0120 14:32:43.513432 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
I0120 14:32:43.575002 950903 logs.go:123] Gathering logs for kube-controller-manager [f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec] ...
I0120 14:32:43.575049 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
I0120 14:32:43.635251 950903 logs.go:123] Gathering logs for storage-provisioner [46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa] ...
I0120 14:32:43.635291 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
I0120 14:32:43.679772 950903 logs.go:123] Gathering logs for kube-scheduler [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f] ...
I0120 14:32:43.679802 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
I0120 14:32:43.725126 950903 logs.go:123] Gathering logs for kube-proxy [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d] ...
I0120 14:32:43.725160 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
I0120 14:32:43.764221 950903 logs.go:123] Gathering logs for storage-provisioner [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233] ...
I0120 14:32:43.764246 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
I0120 14:32:43.803933 950903 logs.go:123] Gathering logs for kube-apiserver [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c] ...
I0120 14:32:43.803963 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
I0120 14:32:43.865136 950903 logs.go:123] Gathering logs for kube-apiserver [032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63] ...
I0120 14:32:43.865173 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
I0120 14:32:43.927846 950903 logs.go:123] Gathering logs for etcd [4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b] ...
I0120 14:32:43.927885 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
I0120 14:32:43.976062 950903 logs.go:123] Gathering logs for coredns [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b] ...
I0120 14:32:43.976150 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
I0120 14:32:44.017480 950903 logs.go:123] Gathering logs for kube-scheduler [a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071] ...
I0120 14:32:44.017512 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
I0120 14:32:44.074744 950903 logs.go:123] Gathering logs for kindnet [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed] ...
I0120 14:32:44.074778 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
I0120 14:32:44.129782 950903 logs.go:123] Gathering logs for container status ...
I0120 14:32:44.129812 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
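Each "Gathering logs for X [...] ..." / "Run: ..." pair above is one ssh_runner invocation of crictl against a single container ID, capped by the container-status pass. Collapsed into a loop, the sweep amounts to roughly the following sketch (an approximation: minikube also interleaves journalctl and kubectl describe-nodes passes, which appear later in this log):

    # Tail the last 400 log lines of every container the CRI knows about
    for id in $(sudo crictl ps -a --quiet); do
      echo "=== container $id ==="
      sudo /usr/bin/crictl logs --tail 400 "$id"
    done
    # The final "container status" pass
    sudo crictl ps -a || sudo docker ps -a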
I0120 14:32:44.177518 950903 out.go:358] Setting ErrFile to fd 2...
I0120 14:32:44.177547 950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 14:32:44.177739 950903 out.go:270] X Problems detected in kubelet:
W0120 14:32:44.177760 950903 out.go:270] Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:44.177785 950903 out.go:270] Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:44.177798 950903 out.go:270] Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:44.177805 950903 out.go:270] Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:44.177811 950903 out.go:270] Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
I0120 14:32:44.177818 950903 out.go:358] Setting ErrFile to fd 2...
I0120 14:32:44.177825 950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:32:54.183032 950903 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:32:54.196296 950903 api_server.go:72] duration metric: took 5m48.957681866s to wait for apiserver process to appear ...
I0120 14:32:54.196319 950903 api_server.go:88] waiting for apiserver healthz status ...
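The healthz wait that starts here can be reproduced by hand: first confirm the apiserver process with the same pgrep pattern the log just ran, then query /healthz through the on-node kubectl (same binary and kubeconfig paths this log uses elsewhere), which should print "ok" once the apiserver is ready:

    # Is the apiserver process up?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Does it report healthy? Prints "ok" when ready.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get --raw /healthz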
I0120 14:32:54.196358 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:32:54.196418 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:32:54.237364 950903 cri.go:89] found id: "7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
I0120 14:32:54.237383 950903 cri.go:89] found id: "032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
I0120 14:32:54.237388 950903 cri.go:89] found id: ""
I0120 14:32:54.237395 950903 logs.go:282] 2 containers: [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63]
I0120 14:32:54.237452 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.241365 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.244944 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:32:54.245021 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:32:54.290562 950903 cri.go:89] found id: "260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
I0120 14:32:54.290585 950903 cri.go:89] found id: "4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
I0120 14:32:54.290590 950903 cri.go:89] found id: ""
I0120 14:32:54.290598 950903 logs.go:282] 2 containers: [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b]
I0120 14:32:54.290659 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.294510 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.298115 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:32:54.298194 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:32:54.343372 950903 cri.go:89] found id: "df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
I0120 14:32:54.343391 950903 cri.go:89] found id: "49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
I0120 14:32:54.343396 950903 cri.go:89] found id: ""
I0120 14:32:54.343403 950903 logs.go:282] 2 containers: [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d]
I0120 14:32:54.343464 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.349876 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.353487 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:32:54.353670 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:32:54.404374 950903 cri.go:89] found id: "901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
I0120 14:32:54.404402 950903 cri.go:89] found id: "a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
I0120 14:32:54.404407 950903 cri.go:89] found id: ""
I0120 14:32:54.404415 950903 logs.go:282] 2 containers: [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071]
I0120 14:32:54.404476 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.408537 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.412682 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:32:54.412783 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:32:54.460122 950903 cri.go:89] found id: "980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
I0120 14:32:54.460145 950903 cri.go:89] found id: "4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
I0120 14:32:54.460150 950903 cri.go:89] found id: ""
I0120 14:32:54.460158 950903 logs.go:282] 2 containers: [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25]
I0120 14:32:54.460215 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.464203 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.468701 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:32:54.468781 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:32:54.517365 950903 cri.go:89] found id: "cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
I0120 14:32:54.517389 950903 cri.go:89] found id: "f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
I0120 14:32:54.517394 950903 cri.go:89] found id: ""
I0120 14:32:54.517401 950903 logs.go:282] 2 containers: [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec]
I0120 14:32:54.517461 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.521673 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.525274 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:32:54.525351 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:32:54.571915 950903 cri.go:89] found id: "15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
I0120 14:32:54.571943 950903 cri.go:89] found id: "4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
I0120 14:32:54.571950 950903 cri.go:89] found id: ""
I0120 14:32:54.571957 950903 logs.go:282] 2 containers: [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f]
I0120 14:32:54.572019 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.576070 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.579794 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:32:54.579879 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:32:54.618519 950903 cri.go:89] found id: "0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
I0120 14:32:54.618588 950903 cri.go:89] found id: "46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
I0120 14:32:54.618600 950903 cri.go:89] found id: ""
I0120 14:32:54.618609 950903 logs.go:282] 2 containers: [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa]
I0120 14:32:54.618677 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.622286 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.625962 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:32:54.626082 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:32:54.665109 950903 cri.go:89] found id: "c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
I0120 14:32:54.665134 950903 cri.go:89] found id: ""
I0120 14:32:54.665143 950903 logs.go:282] 1 containers: [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6]
I0120 14:32:54.665201 950903 ssh_runner.go:195] Run: which crictl
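Note the pattern in the enumeration above: every component except kubernetes-dashboard reports two container IDs. The second ID in each pair is most plausibly an exited earlier instance sitting alongside the current one. A quick way to check (a sketch; the --state filter is assumed to be supported by the crictl version on the node):

    # Running vs. exited instances of one component, e.g. kube-apiserver
    sudo crictl ps --name kube-apiserver
    sudo crictl ps -a --state exited --name kube-apiserver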
I0120 14:32:54.668912 950903 logs.go:123] Gathering logs for kube-controller-manager [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45] ...
I0120 14:32:54.668936 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
I0120 14:32:54.731588 950903 logs.go:123] Gathering logs for kube-controller-manager [f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec] ...
I0120 14:32:54.731623 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
I0120 14:32:54.798223 950903 logs.go:123] Gathering logs for kindnet [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed] ...
I0120 14:32:54.798262 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
I0120 14:32:54.849667 950903 logs.go:123] Gathering logs for describe nodes ...
I0120 14:32:54.849699 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:32:55.017611 950903 logs.go:123] Gathering logs for kube-apiserver [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c] ...
I0120 14:32:55.017703 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
I0120 14:32:55.079897 950903 logs.go:123] Gathering logs for kube-scheduler [a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071] ...
I0120 14:32:55.079935 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
I0120 14:32:55.127145 950903 logs.go:123] Gathering logs for kube-proxy [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d] ...
I0120 14:32:55.127184 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
I0120 14:32:55.179168 950903 logs.go:123] Gathering logs for kubelet ...
I0120 14:32:55.179197 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0120 14:32:55.231529 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.690899 663 reflector.go:138] object-"kube-system"/"coredns-token-f95sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f95sh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.231791 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691117 663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.232001 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691376 663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.232213 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691453 663 reflector.go:138] object-"default"/"default-token-8wp7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8wp7x" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.232424 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691503 663 reflector.go:138] object-"kube-system"/"kindnet-token-xx7dh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xx7dh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.232643 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691562 663 reflector.go:138] object-"kube-system"/"kube-proxy-token-s6tbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s6tbt" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.232867 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691635 663 reflector.go:138] object-"kube-system"/"metrics-server-token-dgscp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dgscp" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.233121 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.692028 663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-mlrbf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-mlrbf" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.242036 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:27 old-k8s-version-140749 kubelet[663]: E0120 14:27:27.904251 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:55.242233 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:28 old-k8s-version-140749 kubelet[663]: E0120 14:27:28.466147 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.245063 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:41 old-k8s-version-140749 kubelet[663]: E0120 14:27:41.953783 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:55.246929 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:43 old-k8s-version-140749 kubelet[663]: E0120 14:27:43.761273 663 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-xh79t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-xh79t" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.247464 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:55 old-k8s-version-140749 kubelet[663]: E0120 14:27:55.965627 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.248064 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:57 old-k8s-version-140749 kubelet[663]: E0120 14:27:57.605396 663 pod_workers.go:191] Error syncing pod e9c231b5-a5c1-498d-aa26-caf987208dc2 ("storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"
W0120 14:32:55.248529 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:58 old-k8s-version-140749 kubelet[663]: E0120 14:27:58.615334 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.248857 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:59 old-k8s-version-140749 kubelet[663]: E0120 14:27:59.633074 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.249550 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:05 old-k8s-version-140749 kubelet[663]: E0120 14:28:05.586265 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.252091 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:06 old-k8s-version-140749 kubelet[663]: E0120 14:28:06.962070 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:55.252545 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:17 old-k8s-version-140749 kubelet[663]: E0120 14:28:17.944316 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.253009 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:18 old-k8s-version-140749 kubelet[663]: E0120 14:28:18.685866 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.253399 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:25 old-k8s-version-140749 kubelet[663]: E0120 14:28:25.586275 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.253597 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:32 old-k8s-version-140749 kubelet[663]: E0120 14:28:32.945271 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.253925 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:37 old-k8s-version-140749 kubelet[663]: E0120 14:28:37.943400 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.254111 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:46 old-k8s-version-140749 kubelet[663]: E0120 14:28:46.943760 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.254695 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:51 old-k8s-version-140749 kubelet[663]: E0120 14:28:51.768837 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.255022 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:55 old-k8s-version-140749 kubelet[663]: E0120 14:28:55.585724 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.258008 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:00 old-k8s-version-140749 kubelet[663]: E0120 14:29:00.952397 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:55.258363 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:10 old-k8s-version-140749 kubelet[663]: E0120 14:29:10.942909 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.258551 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:11 old-k8s-version-140749 kubelet[663]: E0120 14:29:11.944209 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.258884 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:22 old-k8s-version-140749 kubelet[663]: E0120 14:29:22.951255 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.259078 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:23 old-k8s-version-140749 kubelet[663]: E0120 14:29:23.956425 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.259667 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:35 old-k8s-version-140749 kubelet[663]: E0120 14:29:35.903419 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.259852 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:36 old-k8s-version-140749 kubelet[663]: E0120 14:29:36.945915 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.260180 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:45 old-k8s-version-140749 kubelet[663]: E0120 14:29:45.585844 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.260364 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:51 old-k8s-version-140749 kubelet[663]: E0120 14:29:51.943550 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.260690 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:59 old-k8s-version-140749 kubelet[663]: E0120 14:29:59.943021 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.260876 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:05 old-k8s-version-140749 kubelet[663]: E0120 14:30:05.943119 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.261204 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:13 old-k8s-version-140749 kubelet[663]: E0120 14:30:13.942813 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.261391 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:17 old-k8s-version-140749 kubelet[663]: E0120 14:30:17.943166 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.261725 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.943282 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.264343 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.959102 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:55.264682 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:40 old-k8s-version-140749 kubelet[663]: E0120 14:30:40.946333 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.264869 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:43 old-k8s-version-140749 kubelet[663]: E0120 14:30:43.946388 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.265195 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:52 old-k8s-version-140749 kubelet[663]: E0120 14:30:52.943384 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.265378 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:57 old-k8s-version-140749 kubelet[663]: E0120 14:30:57.943462 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.265970 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:04 old-k8s-version-140749 kubelet[663]: E0120 14:31:04.184881 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.266300 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:05 old-k8s-version-140749 kubelet[663]: E0120 14:31:05.586278 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.266484 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:11 old-k8s-version-140749 kubelet[663]: E0120 14:31:11.943489 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.266811 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:19 old-k8s-version-140749 kubelet[663]: E0120 14:31:19.942873 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.266995 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:22 old-k8s-version-140749 kubelet[663]: E0120 14:31:22.943814 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.267180 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:33 old-k8s-version-140749 kubelet[663]: E0120 14:31:33.943249 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.267508 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: E0120 14:31:34.945214 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.267693 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:46 old-k8s-version-140749 kubelet[663]: E0120 14:31:46.943127 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.268018 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: E0120 14:31:48.943309 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.268202 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:59 old-k8s-version-140749 kubelet[663]: E0120 14:31:59.943359 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.268526 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: E0120 14:32:00.943110 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.268851 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.942810 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.269034 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.269217 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.269551 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.269743 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.270064 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.270393 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.942791 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.270576 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.943917 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
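The two recurring failures above are metrics-server stuck in ImagePullBackOff on the deliberately unreachable fake.domain registry, and dashboard-metrics-scraper stuck in CrashLoopBackOff. A minimal manual spot check of the same pods, assuming kubectl is on PATH, the kubeconfig written by this run is active, and the context/label names follow minikube defaults (assumptions, not taken from the log):

kubectl --context old-k8s-version-140749 -n kube-system get pods -l k8s-app=metrics-server   # expect ImagePullBackOff
kubectl --context old-k8s-version-140749 -n kubernetes-dashboard get pods                     # expect CrashLoopBackOff on the scraper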
I0120 14:32:55.270586 950903 logs.go:123] Gathering logs for etcd [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b] ...
I0120 14:32:55.270600 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
I0120 14:32:55.318446 950903 logs.go:123] Gathering logs for coredns [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b] ...
I0120 14:32:55.318482 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
I0120 14:32:55.374342 950903 logs.go:123] Gathering logs for dmesg ...
I0120 14:32:55.374372 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:32:55.397751 950903 logs.go:123] Gathering logs for etcd [4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b] ...
I0120 14:32:55.397781 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
I0120 14:32:55.441396 950903 logs.go:123] Gathering logs for kube-proxy [4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25] ...
I0120 14:32:55.441427 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
I0120 14:32:55.485012 950903 logs.go:123] Gathering logs for kindnet [4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f] ...
I0120 14:32:55.485049 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
I0120 14:32:55.538388 950903 logs.go:123] Gathering logs for storage-provisioner [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233] ...
I0120 14:32:55.538415 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
I0120 14:32:55.603551 950903 logs.go:123] Gathering logs for storage-provisioner [46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa] ...
I0120 14:32:55.603583 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
I0120 14:32:55.653716 950903 logs.go:123] Gathering logs for kubernetes-dashboard [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6] ...
I0120 14:32:55.653743 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
I0120 14:32:55.705317 950903 logs.go:123] Gathering logs for kube-apiserver [032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63] ...
I0120 14:32:55.705344 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
I0120 14:32:55.761106 950903 logs.go:123] Gathering logs for coredns [49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d] ...
I0120 14:32:55.761142 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
I0120 14:32:55.800636 950903 logs.go:123] Gathering logs for kube-scheduler [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f] ...
I0120 14:32:55.800666 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
I0120 14:32:55.845669 950903 logs.go:123] Gathering logs for containerd ...
I0120 14:32:55.845701 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:32:55.917760 950903 logs.go:123] Gathering logs for container status ...
I0120 14:32:55.917799 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
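The container-status command above falls back from crictl to docker if crictl is absent. The equivalent check can be run by hand through minikube's ssh wrapper; a sketch assuming the profile name from this run:

# Same container-status probe as logs.go issues above
minikube -p old-k8s-version-140749 ssh "sudo crictl ps -a || sudo docker ps -a"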
I0120 14:32:55.994852 950903 out.go:358] Setting ErrFile to fd 2...
I0120 14:32:55.994879 950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 14:32:55.994927 950903 out.go:270] X Problems detected in kubelet:
W0120 14:32:55.994945 950903 out.go:270] Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.994954 950903 out.go:270] Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.994966 950903 out.go:270] Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.994973 950903 out.go:270] Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.942791 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.994985 950903 out.go:270] Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.943917 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 14:32:55.994992 950903 out.go:358] Setting ErrFile to fd 2...
I0120 14:32:55.995007 950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:33:05.995189 950903 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0120 14:33:06.005351 950903 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
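The apiserver is answering its health probe even though the start ultimately fails. The same probe by hand, with the IP and port taken from the log above (-k because the cluster CA is not in the local trust store):

curl -k https://192.168.85.2:8443/healthz
# expected output: ok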
I0120 14:33:06.009443 950903 out.go:201]
W0120 14:33:06.013033 950903 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0120 14:33:06.013087 950903 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0120 14:33:06.013119 950903 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0120 14:33:06.013130 950903 out.go:270] *
W0120 14:33:06.014124 950903 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0120 14:33:06.017802 950903 out.go:201]
** /stderr **
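The suggested recovery for K8S_UNHEALTHY_CONTROL_PLANE is a full teardown before retrying. A sketch of that sequence, using the profile and key flags from this run (destructive: --purge removes all cached profiles and state under the minikube home directory):

minikube delete --all --purge
out/minikube-linux-arm64 start -p old-k8s-version-140749 --memory=2200 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0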
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-140749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-140749
helpers_test.go:235: (dbg) docker inspect old-k8s-version-140749:
-- stdout --
[
{
"Id": "b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135",
"Created": "2025-01-20T14:24:14.373777705Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 951200,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-01-20T14:26:57.317836884Z",
"FinishedAt": "2025-01-20T14:26:56.187336949Z"
},
"Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
"ResolvConfPath": "/var/lib/docker/containers/b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135/hostname",
"HostsPath": "/var/lib/docker/containers/b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135/hosts",
"LogPath": "/var/lib/docker/containers/b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135/b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135-json.log",
"Name": "/old-k8s-version-140749",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-140749:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-140749",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/028860b59d8ea195b0fb1cfa13f453bb1f8e253fd630ca752a26ac065a860937-init/diff:/var/lib/docker/overlay2/59354dd32046d8588beaaa77dbeeb3a26843a7c570ae5e66a22312f5030cf994/diff",
"MergedDir": "/var/lib/docker/overlay2/028860b59d8ea195b0fb1cfa13f453bb1f8e253fd630ca752a26ac065a860937/merged",
"UpperDir": "/var/lib/docker/overlay2/028860b59d8ea195b0fb1cfa13f453bb1f8e253fd630ca752a26ac065a860937/diff",
"WorkDir": "/var/lib/docker/overlay2/028860b59d8ea195b0fb1cfa13f453bb1f8e253fd630ca752a26ac065a860937/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-140749",
"Source": "/var/lib/docker/volumes/old-k8s-version-140749/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-140749",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-140749",
"name.minikube.sigs.k8s.io": "old-k8s-version-140749",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f9a99c187cf4afeeb2ae3836d4a8f90eee55e8ebd52420897d9108ef7c986fcf",
"SandboxKey": "/var/run/docker/netns/f9a99c187cf4",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33829"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33830"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33833"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33831"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33832"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-140749": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:55:02",
"DriverOpts": null,
"NetworkID": "37f97352090f4bab13768da51fc0a8b4e0c2adb64e5d4d447c2ef43471e862ae",
"EndpointID": "7a2d86ae765090951475cad42a22f7da12479d57c6791bc691908f84471040b8",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-140749",
"b9e09679f407"
]
}
}
}
}
]
-- /stdout --
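Individual fields can be pulled from the container without reading the full JSON by using docker inspect format templates; the expected values below are taken directly from the dump above:

docker inspect -f '{{.State.Status}}' old-k8s-version-140749   # running
docker port old-k8s-version-140749 8443                        # 127.0.0.1:33832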
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-140749 -n old-k8s-version-140749
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-140749 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-140749 logs -n 25: (2.08502213s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p pause-853381 | pause-853381 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-071479 | force-systemd-env-071479 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-071479 | force-systemd-env-071479 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
| start | -p cert-expiration-857413 | cert-expiration-857413 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| pause | -p pause-853381 | pause-853381 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
| | --alsologtostderr -v=5 | | | | | |
| unpause | -p pause-853381 | pause-853381 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
| | --alsologtostderr -v=5 | | | | | |
| pause | -p pause-853381 | pause-853381 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
| | --alsologtostderr -v=5 | | | | | |
| delete | -p pause-853381 | pause-853381 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
| | --alsologtostderr -v=5 | | | | | |
| delete | -p pause-853381 | pause-853381 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
| start | -p cert-options-968792 | cert-options-968792 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:24 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-968792 ssh | cert-options-968792 | jenkins | v1.35.0 | 20 Jan 25 14:24 UTC | 20 Jan 25 14:24 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-968792 -- sudo | cert-options-968792 | jenkins | v1.35.0 | 20 Jan 25 14:24 UTC | 20 Jan 25 14:24 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-968792 | cert-options-968792 | jenkins | v1.35.0 | 20 Jan 25 14:24 UTC | 20 Jan 25 14:24 UTC |
| start | -p old-k8s-version-140749 | old-k8s-version-140749 | jenkins | v1.35.0 | 20 Jan 25 14:24 UTC | 20 Jan 25 14:26 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p old-k8s-version-140749 | old-k8s-version-140749 | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:26 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-140749 | old-k8s-version-140749 | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:26 UTC |
| | --alsologtostderr -v=3 | | | | | |
| start | -p cert-expiration-857413 | cert-expiration-857413 | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:27 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| addons | enable dashboard -p old-k8s-version-140749 | old-k8s-version-140749 | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:26 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-140749 | old-k8s-version-140749 | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| delete | -p cert-expiration-857413 | cert-expiration-857413 | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
| start | -p no-preload-193023 | no-preload-193023 | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:28 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.0 | | | | | |
| addons | enable metrics-server -p no-preload-193023 | no-preload-193023 | jenkins | v1.35.0 | 20 Jan 25 14:28 UTC | 20 Jan 25 14:28 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-193023 | no-preload-193023 | jenkins | v1.35.0 | 20 Jan 25 14:28 UTC | 20 Jan 25 14:28 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-193023 | no-preload-193023 | jenkins | v1.35.0 | 20 Jan 25 14:28 UTC | 20 Jan 25 14:28 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-193023 | no-preload-193023 | jenkins | v1.35.0 | 20 Jan 25 14:28 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.0 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/20 14:28:41
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.23.4 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0120 14:28:41.934452 959078 out.go:345] Setting OutFile to fd 1 ...
I0120 14:28:41.934649 959078 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:28:41.934681 959078 out.go:358] Setting ErrFile to fd 2...
I0120 14:28:41.934703 959078 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:28:41.934986 959078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
I0120 14:28:41.935422 959078 out.go:352] Setting JSON to false
I0120 14:28:41.936549 959078 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15067,"bootTime":1737368255,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0120 14:28:41.936659 959078 start.go:139] virtualization:
I0120 14:28:41.941796 959078 out.go:177] * [no-preload-193023] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0120 14:28:41.947838 959078 out.go:177] - MINIKUBE_LOCATION=20242
I0120 14:28:41.947902 959078 notify.go:220] Checking for updates...
I0120 14:28:41.954426 959078 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 14:28:41.958248 959078 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
I0120 14:28:41.961722 959078 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
I0120 14:28:41.965621 959078 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0120 14:28:41.968528 959078 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 14:28:41.971959 959078 config.go:182] Loaded profile config "no-preload-193023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 14:28:41.972547 959078 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 14:28:41.995838 959078 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
I0120 14:28:41.995969 959078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 14:28:42.058529 959078 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 14:28:42.048122249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 14:28:42.058656 959078 docker.go:318] overlay module found
I0120 14:28:42.061770 959078 out.go:177] * Using the docker driver based on existing profile
I0120 14:28:42.064660 959078 start.go:297] selected driver: docker
I0120 14:28:42.064724 959078 start.go:901] validating driver "docker" against &{Name:no-preload-193023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-193023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:28:42.064847 959078 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 14:28:42.065979 959078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 14:28:42.125791 959078 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 14:28:42.11450711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 14:28:42.126355 959078 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 14:28:42.126411 959078 cni.go:84] Creating CNI manager for ""
I0120 14:28:42.126461 959078 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 14:28:42.126512 959078 start.go:340] cluster config:
{Name:no-preload-193023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-193023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:28:42.134189 959078 out.go:177] * Starting "no-preload-193023" primary control-plane node in "no-preload-193023" cluster
I0120 14:28:42.137286 959078 cache.go:121] Beginning downloading kic base image for docker with containerd
I0120 14:28:42.142408 959078 out.go:177] * Pulling base image v0.0.46 ...
I0120 14:28:42.145528 959078 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 14:28:42.145661 959078 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0120 14:28:42.145778 959078 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/config.json ...
I0120 14:28:42.146265 959078 cache.go:107] acquiring lock: {Name:mk048b29a53f4d008c3052c3c6bc803c91b93e06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:28:42.146395 959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0120 14:28:42.146426 959078 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 155.479µs
I0120 14:28:42.146446 959078 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0120 14:28:42.146462 959078 cache.go:107] acquiring lock: {Name:mkb7aaee8835795c6c014c1ce05248e5184973f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:28:42.146480 959078 cache.go:107] acquiring lock: {Name:mkd49f8a3a7d8b62eaae6b30d36a72bc3f37b9c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:28:42.146504 959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
I0120 14:28:42.146511 959078 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 51.249µs
I0120 14:28:42.146517 959078 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
I0120 14:28:42.146528 959078 cache.go:107] acquiring lock: {Name:mkb21e7156b8a8154a7bb49366e1b58ab4b63c90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:28:42.146553 959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0 exists
I0120 14:28:42.146557 959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 exists
I0120 14:28:42.146563 959078 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0" took 36.792µs
I0120 14:28:42.146562 959078 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.0" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0" took 91.044µs
I0120 14:28:42.146571 959078 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.0 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0 succeeded
I0120 14:28:42.146578 959078 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 succeeded
I0120 14:28:42.146584 959078 cache.go:107] acquiring lock: {Name:mk15007d771510bcbb3138dab20c2214e874bda2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:28:42.146588 959078 cache.go:107] acquiring lock: {Name:mk6b4a9537d68dccdb743907a9c87d1a89dd16d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:28:42.146620 959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0 exists
I0120 14:28:42.146624 959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
I0120 14:28:42.146627 959078 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.0" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0" took 44.587µs
I0120 14:28:42.146631 959078 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 44.299µs
I0120 14:28:42.146639 959078 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
I0120 14:28:42.146633 959078 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.0 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0 succeeded
I0120 14:28:42.146651 959078 cache.go:107] acquiring lock: {Name:mk04156d8a3480876042b13078b6a9d379533b16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:28:42.146657 959078 cache.go:107] acquiring lock: {Name:mkb58b2b584ebb6bcc71be907aa61ea8c3981782 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:28:42.146686 959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0 exists
I0120 14:28:42.146691 959078 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.0" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0" took 35.347µs
I0120 14:28:42.146697 959078 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.0 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0 succeeded
I0120 14:28:42.146786 959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0 exists
I0120 14:28:42.146799 959078 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.0" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0" took 153.084µs
I0120 14:28:42.146808 959078 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.0 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0 succeeded
I0120 14:28:42.146815 959078 cache.go:87] Successfully saved all images to host disk.
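Every image above was served from the on-disk cache rather than pulled. The cache layout can be confirmed directly, using the path the log itself reports:

ls /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/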
I0120 14:28:42.171787 959078 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0120 14:28:42.171824 959078 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0120 14:28:42.171842 959078 cache.go:227] Successfully downloaded all kic artifacts
I0120 14:28:42.171884 959078 start.go:360] acquireMachinesLock for no-preload-193023: {Name:mk47940fca7af88b855cfd6901e9b3ed9ca36828 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:28:42.171955 959078 start.go:364] duration metric: took 48.295µs to acquireMachinesLock for "no-preload-193023"
I0120 14:28:42.171985 959078 start.go:96] Skipping create...Using existing machine configuration
I0120 14:28:42.171998 959078 fix.go:54] fixHost starting:
I0120 14:28:42.172290 959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
I0120 14:28:42.202038 959078 fix.go:112] recreateIfNeeded on no-preload-193023: state=Stopped err=<nil>
W0120 14:28:42.202075 959078 fix.go:138] unexpected machine state, will restart: <nil>
I0120 14:28:42.205634 959078 out.go:177] * Restarting existing docker container for "no-preload-193023" ...
I0120 14:28:42.449722 950903 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"True"
I0120 14:28:42.449748 950903 pod_ready.go:82] duration metric: took 1.008350501s for pod "kube-scheduler-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
I0120 14:28:42.449760 950903 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace to be "Ready" ...
I0120 14:28:44.455878 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:46.456063 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:42.210275 959078 cli_runner.go:164] Run: docker start no-preload-193023
I0120 14:28:42.593875 959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
I0120 14:28:42.616542 959078 kic.go:430] container "no-preload-193023" state is running.
I0120 14:28:42.616960 959078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-193023
I0120 14:28:42.643247 959078 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/config.json ...
I0120 14:28:42.643494 959078 machine.go:93] provisionDockerMachine start ...
I0120 14:28:42.643558 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:42.664656 959078 main.go:141] libmachine: Using SSH client type: native
I0120 14:28:42.664932 959078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33839 <nil> <nil>}
I0120 14:28:42.664943 959078 main.go:141] libmachine: About to run SSH command:
hostname
I0120 14:28:42.666926 959078 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0120 14:28:45.797211 959078 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-193023
I0120 14:28:45.797247 959078 ubuntu.go:169] provisioning hostname "no-preload-193023"
I0120 14:28:45.797313 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:45.817153 959078 main.go:141] libmachine: Using SSH client type: native
I0120 14:28:45.817402 959078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33839 <nil> <nil>}
I0120 14:28:45.817418 959078 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-193023 && echo "no-preload-193023" | sudo tee /etc/hostname
I0120 14:28:45.968830 959078 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-193023
I0120 14:28:45.968949 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:45.989557 959078 main.go:141] libmachine: Using SSH client type: native
I0120 14:28:45.989845 959078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33839 <nil> <nil>}
I0120 14:28:45.989871 959078 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-193023' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-193023/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-193023' | sudo tee -a /etc/hosts;
fi
fi
I0120 14:28:46.118033 959078 main.go:141] libmachine: SSH cmd err, output: <nil>:
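The script above only rewrites or appends a 127.0.1.1 entry when the hostname is missing from /etc/hosts, so it is idempotent across restarts. A sketch of verifying the result by hand, assuming the profile name from this run:

minikube -p no-preload-193023 ssh "grep no-preload-193023 /etc/hosts"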
I0120 14:28:46.118062 959078 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20242-741865/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-741865/.minikube}
I0120 14:28:46.118083 959078 ubuntu.go:177] setting up certificates
I0120 14:28:46.118093 959078 provision.go:84] configureAuth start
I0120 14:28:46.118156 959078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-193023
I0120 14:28:46.135664 959078 provision.go:143] copyHostCerts
I0120 14:28:46.135731 959078 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem, removing ...
I0120 14:28:46.135740 959078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem
I0120 14:28:46.135817 959078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem (1078 bytes)
I0120 14:28:46.135919 959078 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem, removing ...
I0120 14:28:46.135924 959078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem
I0120 14:28:46.135950 959078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem (1123 bytes)
I0120 14:28:46.136013 959078 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem, removing ...
I0120 14:28:46.136017 959078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem
I0120 14:28:46.136040 959078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem (1679 bytes)
I0120 14:28:46.136096 959078 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem org=jenkins.no-preload-193023 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-193023]
I0120 14:28:46.648310 959078 provision.go:177] copyRemoteCerts
I0120 14:28:46.648393 959078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 14:28:46.648446 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:46.668332 959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
I0120 14:28:46.763001 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0120 14:28:46.790698 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0120 14:28:46.816629 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0120 14:28:46.846580 959078 provision.go:87] duration metric: took 728.473299ms to configureAuth
I0120 14:28:46.846608 959078 ubuntu.go:193] setting minikube options for container-runtime
I0120 14:28:46.846825 959078 config.go:182] Loaded profile config "no-preload-193023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 14:28:46.846846 959078 machine.go:96] duration metric: took 4.203338307s to provisionDockerMachine
I0120 14:28:46.846857 959078 start.go:293] postStartSetup for "no-preload-193023" (driver="docker")
I0120 14:28:46.846868 959078 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 14:28:46.846928 959078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 14:28:46.846975 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:46.864260 959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
I0120 14:28:46.965911 959078 ssh_runner.go:195] Run: cat /etc/os-release
I0120 14:28:46.969797 959078 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0120 14:28:46.969829 959078 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0120 14:28:46.969840 959078 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0120 14:28:46.969847 959078 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0120 14:28:46.969858 959078 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-741865/.minikube/addons for local assets ...
I0120 14:28:46.969913 959078 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-741865/.minikube/files for local assets ...
I0120 14:28:46.970002 959078 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem -> 7472562.pem in /etc/ssl/certs
I0120 14:28:46.970118 959078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 14:28:46.982497 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem --> /etc/ssl/certs/7472562.pem (1708 bytes)
I0120 14:28:47.013212 959078 start.go:296] duration metric: took 166.323315ms for postStartSetup
I0120 14:28:47.013309 959078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0120 14:28:47.013375 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:47.033830 959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
I0120 14:28:47.119356 959078 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0120 14:28:47.124063 959078 fix.go:56] duration metric: took 4.952056491s for fixHost
I0120 14:28:47.124090 959078 start.go:83] releasing machines lock for "no-preload-193023", held for 4.952121509s
I0120 14:28:47.124164 959078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-193023
I0120 14:28:47.141660 959078 ssh_runner.go:195] Run: cat /version.json
I0120 14:28:47.141675 959078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 14:28:47.141719 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:47.141748 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:47.161506 959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
I0120 14:28:47.163155 959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
I0120 14:28:47.253186 959078 ssh_runner.go:195] Run: systemctl --version
I0120 14:28:47.412717 959078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0120 14:28:47.417121 959078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0120 14:28:47.437365 959078 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0120 14:28:47.437450 959078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 14:28:47.446391 959078 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
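The two find commands above normalize /etc/cni/net.d before kindnet takes over: the first patches any loopback config so it carries a "name" field and a cniVersion that containerd 1.7 accepts, the second renames competing bridge/podman configs to *.mk_disabled (none were present on this run). For illustration only, a loopback config after the patch would look roughly like this (hypothetical file content, not read from the node):

cat <<'EOF' > /tmp/loopback.conf.example   # illustrative copy, not written to /etc/cni/net.d
{
  "cniVersion": "1.0.0",
  "name": "loopback",
  "type": "loopback"
}
EOF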
I0120 14:28:47.446415 959078 start.go:495] detecting cgroup driver to use...
I0120 14:28:47.446447 959078 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0120 14:28:47.446507 959078 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 14:28:47.463661 959078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 14:28:47.475659 959078 docker.go:217] disabling cri-docker service (if available) ...
I0120 14:28:47.475758 959078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 14:28:47.489412 959078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 14:28:47.502116 959078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 14:28:47.587426 959078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 14:28:47.680557 959078 docker.go:233] disabling docker service ...
I0120 14:28:47.680629 959078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 14:28:47.695122 959078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 14:28:47.707679 959078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 14:28:47.810797 959078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 14:28:47.894497 959078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 14:28:47.906280 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 14:28:47.922898 959078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0120 14:28:47.934981 959078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 14:28:47.944796 959078 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 14:28:47.944882 959078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 14:28:47.960339 959078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 14:28:47.972162 959078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 14:28:47.984379 959078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 14:28:47.995362 959078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 14:28:48.005518 959078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 14:28:48.018222 959078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0120 14:28:48.030511 959078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
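The sed runs above keep containerd consistent with the "cgroupfs" driver detected on the host: sandbox_image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are rewritten to runc.v2, and the CRI plugin's conf_dir is pinned to /etc/cni/net.d. One way to spot-check the result on the node (plain grep, nothing minikube-specific):

grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
# expected after the edits above:
#   sandbox_image = "registry.k8s.io/pause:3.10"
#   SystemdCgroup = false
#   conf_dir = "/etc/cni/net.d"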
I0120 14:28:48.043542 959078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 14:28:48.053928 959078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 14:28:48.064076 959078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:28:48.162576 959078 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 14:28:48.334577 959078 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0120 14:28:48.334664 959078 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 14:28:48.347835 959078 start.go:563] Will wait 60s for crictl version
I0120 14:28:48.347938 959078 ssh_runner.go:195] Run: which crictl
I0120 14:28:48.352844 959078 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 14:28:48.395647 959078 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.24
RuntimeApiVersion: v1
I0120 14:28:48.395733 959078 ssh_runner.go:195] Run: containerd --version
I0120 14:28:48.420985 959078 ssh_runner.go:195] Run: containerd --version
I0120 14:28:48.456099 959078 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.24 ...
I0120 14:28:48.459224 959078 cli_runner.go:164] Run: docker network inspect no-preload-193023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 14:28:48.475876 959078 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0120 14:28:48.479645 959078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
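The bash one-liner above is the rewrite pattern minikube uses for /etc/hosts entries it owns: strip any stale line for the name, append the fresh mapping, build the result in a temp file, then copy it back in a single step so the live file is never left half-written. The same pattern spelled out (NAME and IP are placeholders):

NAME=host.minikube.internal; IP=192.168.76.1              # placeholders for the managed entry
{ grep -v $'\t'"${NAME}"'$' /etc/hosts; echo "${IP} ${NAME}"; } > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$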
I0120 14:28:48.491349 959078 kubeadm.go:883] updating cluster {Name:no-preload-193023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-193023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 14:28:48.491488 959078 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 14:28:48.491544 959078 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 14:28:48.533247 959078 containerd.go:627] all images are preloaded for containerd runtime.
I0120 14:28:48.533280 959078 cache_images.go:84] Images are preloaded, skipping loading
I0120 14:28:48.533291 959078 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.0 containerd true true} ...
I0120 14:28:48.533390 959078 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-193023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.32.0 ClusterName:no-preload-193023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
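The empty ExecStart= followed by a second ExecStart= in the drop-in above is the standard systemd override idiom: ExecStart is list-valued, so a drop-in must clear the inherited command before substituting its own, otherwise systemd would reject a second command for a simple service. Two stock systemctl commands show what will actually run:

systemctl cat kubelet                              # base unit plus the 10-kubeadm.conf drop-in
systemctl show kubelet -p ExecStart --no-pager     # the effective (last-wins) command line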
I0120 14:28:48.533458 959078 ssh_runner.go:195] Run: sudo crictl info
I0120 14:28:48.578112 959078 cni.go:84] Creating CNI manager for ""
I0120 14:28:48.578140 959078 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 14:28:48.578152 959078 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 14:28:48.578198 959078 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-193023 NodeName:no-preload-193023 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0120 14:28:48.578350 959078 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "no-preload-193023"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.76.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
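The generated file stacks four documents: InitConfiguration and ClusterConfiguration (kubeadm v1beta4), a KubeletConfiguration that forces cgroupDriver: cgroupfs to match the containerd setting above, and a KubeProxyConfiguration with conntrack tuning zeroed out. On recent kubeadm releases (the validate subcommand landed around v1.26; treat its availability as an assumption) the file minikube writes below can be sanity-checked before use:

# hypothetical spot-check; the path matches the scp destination a few lines down
sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new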
I0120 14:28:48.578434 959078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
I0120 14:28:48.589680 959078 binaries.go:44] Found k8s binaries, skipping transfer
I0120 14:28:48.589755 959078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 14:28:48.598748 959078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
I0120 14:28:48.617702 959078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 14:28:48.637865 959078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2307 bytes)
I0120 14:28:48.657128 959078 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0120 14:28:48.660887 959078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 14:28:48.672098 959078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:28:48.767168 959078 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 14:28:48.782013 959078 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023 for IP: 192.168.76.2
I0120 14:28:48.782033 959078 certs.go:194] generating shared ca certs ...
I0120 14:28:48.782049 959078 certs.go:226] acquiring lock for ca certs: {Name:mka7a6ccd7d8b5f47789c70c8e6dc479acdcdb1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:28:48.782194 959078 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.key
I0120 14:28:48.782237 959078 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.key
I0120 14:28:48.782244 959078 certs.go:256] generating profile certs ...
I0120 14:28:48.782331 959078 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.key
I0120 14:28:48.782397 959078 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/apiserver.key.0e8d29cc
I0120 14:28:48.782436 959078 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/proxy-client.key
I0120 14:28:48.782549 959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256.pem (1338 bytes)
W0120 14:28:48.782578 959078 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256_empty.pem, impossibly tiny 0 bytes
I0120 14:28:48.782586 959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem (1679 bytes)
I0120 14:28:48.782611 959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem (1078 bytes)
I0120 14:28:48.782633 959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem (1123 bytes)
I0120 14:28:48.782654 959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem (1679 bytes)
I0120 14:28:48.782696 959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem (1708 bytes)
I0120 14:28:48.783319 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 14:28:48.813102 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0120 14:28:48.840210 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 14:28:48.874022 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0120 14:28:48.910896 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0120 14:28:48.962748 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0120 14:28:49.026063 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 14:28:49.062008 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0120 14:28:49.089566 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem --> /usr/share/ca-certificates/7472562.pem (1708 bytes)
I0120 14:28:49.117075 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 14:28:49.150625 959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256.pem --> /usr/share/ca-certificates/747256.pem (1338 bytes)
I0120 14:28:49.176558 959078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 14:28:49.195049 959078 ssh_runner.go:195] Run: openssl version
I0120 14:28:49.202402 959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7472562.pem && ln -fs /usr/share/ca-certificates/7472562.pem /etc/ssl/certs/7472562.pem"
I0120 14:28:49.212757 959078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7472562.pem
I0120 14:28:49.216539 959078 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 13:48 /usr/share/ca-certificates/7472562.pem
I0120 14:28:49.216609 959078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7472562.pem
I0120 14:28:49.224654 959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7472562.pem /etc/ssl/certs/3ec20f2e.0"
I0120 14:28:49.234055 959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 14:28:49.243912 959078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 14:28:49.248090 959078 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 13:39 /usr/share/ca-certificates/minikubeCA.pem
I0120 14:28:49.248191 959078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 14:28:49.255437 959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 14:28:49.264759 959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/747256.pem && ln -fs /usr/share/ca-certificates/747256.pem /etc/ssl/certs/747256.pem"
I0120 14:28:49.274641 959078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/747256.pem
I0120 14:28:49.278567 959078 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 13:48 /usr/share/ca-certificates/747256.pem
I0120 14:28:49.278637 959078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/747256.pem
I0120 14:28:49.285809 959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/747256.pem /etc/ssl/certs/51391683.0"
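The ls/openssl/ln sequences above install each CA into the OpenSSL trust directory: openssl x509 -hash prints the certificate's subject-name hash, and the /etc/ssl/certs/<hash>.0 symlink is how OpenSSL locates a trusted certificate at verification time. Reproducing one of the links by hand (hash value as observed in this run):

hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)   # prints b5213941 here
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"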
I0120 14:28:49.295368 959078 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 14:28:49.299196 959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0120 14:28:49.306411 959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0120 14:28:49.313425 959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0120 14:28:49.320497 959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0120 14:28:49.327945 959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0120 14:28:49.335613 959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
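Each openssl run above passes -checkend 86400, which exits non-zero when the certificate expires within the next 86400 seconds (24 hours); a cert failing the check would be regenerated rather than reused. For example:

openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
  || echo "expires within 24h, regenerate"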
I0120 14:28:49.343175 959078 kubeadm.go:392] StartCluster: {Name:no-preload-193023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-193023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:28:49.343280 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0120 14:28:49.343361 959078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 14:28:49.387893 959078 cri.go:89] found id: "d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386"
I0120 14:28:49.387924 959078 cri.go:89] found id: "5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315"
I0120 14:28:49.387931 959078 cri.go:89] found id: "209a98ebe1a7a0dbea3c6eecf2c4710020cb40136d3fb46c485448c0bd63dd5c"
I0120 14:28:49.387944 959078 cri.go:89] found id: "057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1"
I0120 14:28:49.387948 959078 cri.go:89] found id: "3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d"
I0120 14:28:49.387952 959078 cri.go:89] found id: "8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa"
I0120 14:28:49.387955 959078 cri.go:89] found id: "949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5"
I0120 14:28:49.387959 959078 cri.go:89] found id: "3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f"
I0120 14:28:49.387962 959078 cri.go:89] found id: ""
I0120 14:28:49.388023 959078 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0120 14:28:49.410583 959078 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-20T14:28:49Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0120 14:28:49.410720 959078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 14:28:49.422690 959078 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0120 14:28:49.422712 959078 kubeadm.go:593] restartPrimaryControlPlane start ...
I0120 14:28:49.422766 959078 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0120 14:28:49.442635 959078 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0120 14:28:49.443231 959078 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-193023" does not appear in /home/jenkins/minikube-integration/20242-741865/kubeconfig
I0120 14:28:49.443499 959078 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-741865/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-193023" cluster setting kubeconfig missing "no-preload-193023" context setting]
I0120 14:28:49.443982 959078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-741865/kubeconfig: {Name:mkcf7578b1c91d60616ac7150d8566b28a92e8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:28:49.445422 959078 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0120 14:28:49.466001 959078 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0120 14:28:49.466036 959078 kubeadm.go:597] duration metric: took 43.316809ms to restartPrimaryControlPlane
I0120 14:28:49.466046 959078 kubeadm.go:394] duration metric: took 122.881048ms to StartCluster
I0120 14:28:49.466061 959078 settings.go:142] acquiring lock: {Name:mkf7c5865cae55b4373a466e1a24783d8090ef1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:28:49.466127 959078 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20242-741865/kubeconfig
I0120 14:28:49.467087 959078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-741865/kubeconfig: {Name:mkcf7578b1c91d60616ac7150d8566b28a92e8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:28:49.467342 959078 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 14:28:49.467645 959078 config.go:182] Loaded profile config "no-preload-193023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 14:28:49.467688 959078 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 14:28:49.467754 959078 addons.go:69] Setting storage-provisioner=true in profile "no-preload-193023"
I0120 14:28:49.467777 959078 addons.go:238] Setting addon storage-provisioner=true in "no-preload-193023"
I0120 14:28:49.467776 959078 addons.go:69] Setting default-storageclass=true in profile "no-preload-193023"
I0120 14:28:49.467787 959078 addons.go:69] Setting metrics-server=true in profile "no-preload-193023"
I0120 14:28:49.467796 959078 addons.go:238] Setting addon metrics-server=true in "no-preload-193023"
I0120 14:28:49.467798 959078 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-193023"
W0120 14:28:49.467801 959078 addons.go:247] addon metrics-server should already be in state true
I0120 14:28:49.467824 959078 host.go:66] Checking if "no-preload-193023" exists ...
I0120 14:28:49.468130 959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
I0120 14:28:49.468291 959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
W0120 14:28:49.467783 959078 addons.go:247] addon storage-provisioner should already be in state true
I0120 14:28:49.470699 959078 host.go:66] Checking if "no-preload-193023" exists ...
I0120 14:28:49.472122 959078 addons.go:69] Setting dashboard=true in profile "no-preload-193023"
I0120 14:28:49.472279 959078 addons.go:238] Setting addon dashboard=true in "no-preload-193023"
W0120 14:28:49.472310 959078 addons.go:247] addon dashboard should already be in state true
I0120 14:28:49.472365 959078 host.go:66] Checking if "no-preload-193023" exists ...
I0120 14:28:49.473336 959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
I0120 14:28:49.478349 959078 out.go:177] * Verifying Kubernetes components...
I0120 14:28:49.478663 959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
I0120 14:28:49.486605 959078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:28:49.531345 959078 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0120 14:28:49.532665 959078 addons.go:238] Setting addon default-storageclass=true in "no-preload-193023"
W0120 14:28:49.532720 959078 addons.go:247] addon default-storageclass should already be in state true
I0120 14:28:49.532748 959078 host.go:66] Checking if "no-preload-193023" exists ...
I0120 14:28:49.533289 959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
I0120 14:28:49.534746 959078 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0120 14:28:49.534777 959078 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0120 14:28:49.534838 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:49.561080 959078 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0120 14:28:49.561083 959078 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 14:28:49.564066 959078 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:28:49.564093 959078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 14:28:49.564161 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:49.567270 959078 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0120 14:28:48.456951 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:50.982318 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:49.570157 959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0120 14:28:49.570188 959078 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0120 14:28:49.570267 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:49.596846 959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
I0120 14:28:49.628235 959078 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 14:28:49.628259 959078 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 14:28:49.628338 959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
I0120 14:28:49.642259 959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
I0120 14:28:49.667798 959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
I0120 14:28:49.685737 959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
I0120 14:28:49.769073 959078 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 14:28:49.894409 959078 node_ready.go:35] waiting up to 6m0s for node "no-preload-193023" to be "Ready" ...
I0120 14:28:49.903776 959078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:28:49.920169 959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0120 14:28:49.920192 959078 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0120 14:28:49.958063 959078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 14:28:49.991758 959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0120 14:28:49.991860 959078 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0120 14:28:50.058449 959078 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0120 14:28:50.058527 959078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0120 14:28:50.247810 959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0120 14:28:50.247891 959078 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0120 14:28:50.252097 959078 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0120 14:28:50.252178 959078 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0120 14:28:50.396278 959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0120 14:28:50.396352 959078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0120 14:28:50.553054 959078 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:28:50.553152 959078 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0120 14:28:50.639266 959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0120 14:28:50.639365 959078 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0120 14:28:50.658647 959078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:28:50.706576 959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0120 14:28:50.706654 959078 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0120 14:28:50.801743 959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0120 14:28:50.801821 959078 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0120 14:28:50.878748 959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0120 14:28:50.878827 959078 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0120 14:28:50.937360 959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:28:50.937435 959078 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0120 14:28:51.040054 959078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:28:53.457118 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:55.457832 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:55.266392 959078 node_ready.go:49] node "no-preload-193023" has status "Ready":"True"
I0120 14:28:55.266419 959078 node_ready.go:38] duration metric: took 5.371925706s for node "no-preload-193023" to be "Ready" ...
I0120 14:28:55.266431 959078 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 14:28:55.334254 959078 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g577w" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.481179 959078 pod_ready.go:93] pod "coredns-668d6bf9bc-g577w" in "kube-system" namespace has status "Ready":"True"
I0120 14:28:55.481263 959078 pod_ready.go:82] duration metric: took 146.914733ms for pod "coredns-668d6bf9bc-g577w" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.481290 959078 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-193023" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.531555 959078 pod_ready.go:93] pod "etcd-no-preload-193023" in "kube-system" namespace has status "Ready":"True"
I0120 14:28:55.531633 959078 pod_ready.go:82] duration metric: took 50.304596ms for pod "etcd-no-preload-193023" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.531664 959078 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-193023" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.549776 959078 pod_ready.go:93] pod "kube-apiserver-no-preload-193023" in "kube-system" namespace has status "Ready":"True"
I0120 14:28:55.549852 959078 pod_ready.go:82] duration metric: took 18.164852ms for pod "kube-apiserver-no-preload-193023" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.549879 959078 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-193023" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.571955 959078 pod_ready.go:93] pod "kube-controller-manager-no-preload-193023" in "kube-system" namespace has status "Ready":"True"
I0120 14:28:55.572030 959078 pod_ready.go:82] duration metric: took 22.129003ms for pod "kube-controller-manager-no-preload-193023" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.572077 959078 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z8rcv" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.581026 959078 pod_ready.go:93] pod "kube-proxy-z8rcv" in "kube-system" namespace has status "Ready":"True"
I0120 14:28:55.581103 959078 pod_ready.go:82] duration metric: took 8.999422ms for pod "kube-proxy-z8rcv" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.581130 959078 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-193023" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.887639 959078 pod_ready.go:93] pod "kube-scheduler-no-preload-193023" in "kube-system" namespace has status "Ready":"True"
I0120 14:28:55.887663 959078 pod_ready.go:82] duration metric: took 306.512834ms for pod "kube-scheduler-no-preload-193023" in "kube-system" namespace to be "Ready" ...
I0120 14:28:55.887676 959078 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace to be "Ready" ...
I0120 14:28:57.907239 959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:58.940210 959078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.036387851s)
I0120 14:28:58.940267 959078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.982183186s)
I0120 14:28:58.940496 959078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.281772043s)
I0120 14:28:58.940520 959078 addons.go:479] Verifying addon metrics-server=true in "no-preload-193023"
I0120 14:28:59.015410 959078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.975267561s)
I0120 14:28:59.017672 959078 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-193023 addons enable metrics-server
I0120 14:28:59.020744 959078 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
I0120 14:28:57.957293 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:00.457254 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:28:59.023663 959078 addons.go:514] duration metric: took 9.555963446s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
I0120 14:29:00.395416 959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
I0120 14:29:02.957338 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
[... both test processes (950903 and 959078) repeat this same pod_ready.go:103 check roughly every 2-2.5s for the next three and a half minutes; neither metrics-server pod ever leaves "Ready":"False" ...]
I0120 14:32:41.394782 959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:41.456476 950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:42.456347 950903 pod_ready.go:82] duration metric: took 4m0.0065748s for pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace to be "Ready" ...
E0120 14:32:42.456373 950903 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0120 14:32:42.456384 950903 pod_ready.go:39] duration metric: took 5m18.75110665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
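The three pod_ready lines above are the poll loop giving up: each pod gets a 4-minute budget, its "Ready" condition is re-checked every couple of seconds, and when the budget is spent the wait fails with "context deadline exceeded". A minimal sketch of that pattern with client-go follows; the package, function, and variable names are illustrative assumptions, not minikube's actual pod_ready.go code.

// Package readiness sketches the readiness poll whose output appears above.
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-checks a pod's "Ready" condition every 2s until it turns
// True or the 4m deadline passes, at which point the caller sees a context
// deadline exceeded error, as in the WaitExtra failure above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient apiserver errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not posted yet
		})
}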
I0120 14:32:42.456400 950903 api_server.go:52] waiting for apiserver process to appear ...
I0120 14:32:42.456430 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:32:42.456494 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:32:42.495561 950903 cri.go:89] found id: "7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
I0120 14:32:42.495581 950903 cri.go:89] found id: "032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
I0120 14:32:42.495586 950903 cri.go:89] found id: ""
I0120 14:32:42.495593 950903 logs.go:282] 2 containers: [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63]
I0120 14:32:42.495650 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.499420 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.502920 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:32:42.503009 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:32:42.542022 950903 cri.go:89] found id: "260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
I0120 14:32:42.542087 950903 cri.go:89] found id: "4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
I0120 14:32:42.542106 950903 cri.go:89] found id: ""
I0120 14:32:42.542131 950903 logs.go:282] 2 containers: [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b]
I0120 14:32:42.542221 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.546159 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.549559 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:32:42.549707 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:32:42.588844 950903 cri.go:89] found id: "df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
I0120 14:32:42.588910 950903 cri.go:89] found id: "49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
I0120 14:32:42.588931 950903 cri.go:89] found id: ""
I0120 14:32:42.588965 950903 logs.go:282] 2 containers: [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d]
I0120 14:32:42.589060 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.593064 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.596734 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:32:42.596827 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:32:42.637742 950903 cri.go:89] found id: "901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
I0120 14:32:42.637766 950903 cri.go:89] found id: "a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
I0120 14:32:42.637772 950903 cri.go:89] found id: ""
I0120 14:32:42.637779 950903 logs.go:282] 2 containers: [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071]
I0120 14:32:42.637837 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.641531 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.645214 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:32:42.645294 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:32:42.694848 950903 cri.go:89] found id: "980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
I0120 14:32:42.694873 950903 cri.go:89] found id: "4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
I0120 14:32:42.694878 950903 cri.go:89] found id: ""
I0120 14:32:42.694885 950903 logs.go:282] 2 containers: [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25]
I0120 14:32:42.694944 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.698884 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.702523 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:32:42.702604 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:32:42.744000 950903 cri.go:89] found id: "cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
I0120 14:32:42.744031 950903 cri.go:89] found id: "f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
I0120 14:32:42.744037 950903 cri.go:89] found id: ""
I0120 14:32:42.744045 950903 logs.go:282] 2 containers: [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec]
I0120 14:32:42.744145 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.748068 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.751593 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:32:42.751671 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:32:42.788738 950903 cri.go:89] found id: "15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
I0120 14:32:42.788761 950903 cri.go:89] found id: "4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
I0120 14:32:42.788766 950903 cri.go:89] found id: ""
I0120 14:32:42.788773 950903 logs.go:282] 2 containers: [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f]
I0120 14:32:42.788833 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.792694 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.796248 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:32:42.796327 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:32:42.835380 950903 cri.go:89] found id: "c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
I0120 14:32:42.835402 950903 cri.go:89] found id: ""
I0120 14:32:42.835411 950903 logs.go:282] 1 containers: [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6]
I0120 14:32:42.835470 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.839424 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:32:42.839588 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:32:42.886867 950903 cri.go:89] found id: "0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
I0120 14:32:42.886943 950903 cri.go:89] found id: "46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
I0120 14:32:42.886963 950903 cri.go:89] found id: ""
I0120 14:32:42.886990 950903 logs.go:282] 2 containers: [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa]
I0120 14:32:42.887084 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:42.892761 950903 ssh_runner.go:195] Run: which crictl
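Each cri.go:54 / cri.go:89 block above is the same enumeration step applied to one component at a time: run "sudo crictl ps -a --quiet --name=<component>" on the node and split the output into container IDs; most components report two IDs here because the restarted cluster retains both the current container and the pre-restart one. A hedged sketch of that step in Go, assuming crictl is installed on the node (the helper name is illustrative):

// Package cri sketches the container-ID enumeration used above.
package cri

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors "sudo crictl ps -a --quiet --name=<name>":
// crictl prints one container ID per line; empty output means no matches.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}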
I0120 14:32:42.897255 950903 logs.go:123] Gathering logs for dmesg ...
I0120 14:32:42.897281 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:32:42.915606 950903 logs.go:123] Gathering logs for describe nodes ...
I0120 14:32:42.915635 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:32:43.086993 950903 logs.go:123] Gathering logs for etcd [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b] ...
I0120 14:32:43.087027 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
I0120 14:32:43.137045 950903 logs.go:123] Gathering logs for coredns [49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d] ...
I0120 14:32:43.137078 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
I0120 14:32:43.177316 950903 logs.go:123] Gathering logs for kindnet [4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f] ...
I0120 14:32:43.177346 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
I0120 14:32:43.226521 950903 logs.go:123] Gathering logs for kubernetes-dashboard [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6] ...
I0120 14:32:43.226552 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
I0120 14:32:43.277166 950903 logs.go:123] Gathering logs for containerd ...
I0120 14:32:43.277198 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:32:43.350057 950903 logs.go:123] Gathering logs for kubelet ...
I0120 14:32:43.350162 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
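The logs.go:138 warnings that follow come from scanning that kubelet journal for error-level entries. A rough sketch of such a scan (assumed and simplified; minikube's actual problem detection matches a richer set of patterns):

// Package logscan sketches the kubelet problem scan used above.
package logscan

import (
	"regexp"
	"strings"
)

// Journald lines from an erroring kubelet look like:
//   Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.690899 663 reflector.go:138] ...
// The klog "E<MMDD>" marker right after "kubelet[pid]:" denotes error severity.
var kubeletErrRE = regexp.MustCompile(`kubelet\[\d+\]: E\d{4} `)

// findKubeletProblems returns every error-severity kubelet line in the journal.
func findKubeletProblems(journal string) []string {
	var problems []string
	for _, line := range strings.Split(journal, "\n") {
		if kubeletErrRE.MatchString(line) {
			problems = append(problems, line)
		}
	}
	return problems
}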
W0120 14:32:43.415129 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.690899 663 reflector.go:138] object-"kube-system"/"coredns-token-f95sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f95sh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.415423 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691117 663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.415671 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691376 663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.415917 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691453 663 reflector.go:138] object-"default"/"default-token-8wp7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8wp7x" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.416155 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691503 663 reflector.go:138] object-"kube-system"/"kindnet-token-xx7dh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xx7dh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.416381 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691562 663 reflector.go:138] object-"kube-system"/"kube-proxy-token-s6tbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s6tbt" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.416607 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691635 663 reflector.go:138] object-"kube-system"/"metrics-server-token-dgscp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dgscp" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.416848 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.692028 663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-mlrbf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-mlrbf" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.425962 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:27 old-k8s-version-140749 kubelet[663]: E0120 14:27:27.904251 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:43.426161 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:28 old-k8s-version-140749 kubelet[663]: E0120 14:27:28.466147 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.428994 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:41 old-k8s-version-140749 kubelet[663]: E0120 14:27:41.953783 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:43.430807 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:43 old-k8s-version-140749 kubelet[663]: E0120 14:27:43.761273 663 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-xh79t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-xh79t" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:43.431350 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:55 old-k8s-version-140749 kubelet[663]: E0120 14:27:55.965627 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.432061 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:57 old-k8s-version-140749 kubelet[663]: E0120 14:27:57.605396 663 pod_workers.go:191] Error syncing pod e9c231b5-a5c1-498d-aa26-caf987208dc2 ("storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"
W0120 14:32:43.432532 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:58 old-k8s-version-140749 kubelet[663]: E0120 14:27:58.615334 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
[... logs.go:138 flags the same two problems repeatedly over the next three minutes: "metrics-server" failing with ErrImagePull/ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4 (the fake.domain DNS lookup keeps failing, as above), and "dashboard-metrics-scraper" in CrashLoopBackOff with its back-off escalating from 10s through 1m20s ...]
W0120 14:32:43.453215 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:04 old-k8s-version-140749 kubelet[663]: E0120 14:31:04.184881 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.453555 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:05 old-k8s-version-140749 kubelet[663]: E0120 14:31:05.586278 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.453747 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:11 old-k8s-version-140749 kubelet[663]: E0120 14:31:11.943489 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.454085 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:19 old-k8s-version-140749 kubelet[663]: E0120 14:31:19.942873 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.454273 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:22 old-k8s-version-140749 kubelet[663]: E0120 14:31:22.943814 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.454460 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:33 old-k8s-version-140749 kubelet[663]: E0120 14:31:33.943249 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.454796 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: E0120 14:31:34.945214 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.454982 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:46 old-k8s-version-140749 kubelet[663]: E0120 14:31:46.943127 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.455332 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: E0120 14:31:48.943309 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.455568 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:59 old-k8s-version-140749 kubelet[663]: E0120 14:31:59.943359 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.455909 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: E0120 14:32:00.943110 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.456251 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.942810 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.456438 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.456624 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.456954 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:43.457154 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:43.457520 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
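The scan above surfaces two repeating failure loops: metrics-server never starts because its image points at the unresolvable registry fake.domain (the DNS lookup against 192.168.85.1:53 returns "no such host"), and dashboard-metrics-scraper sits in a CrashLoopBackOff whose back-off window has grown from 1m20s to 2m40s. The pull failure can be reproduced from inside the node; a minimal sketch, assuming the profile is still running and crictl is on the node's PATH:

# Attempt the same pull the kubelet keeps retrying; this should fail with
# the identical "no such host" resolution error quoted in the log above.
minikube -p old-k8s-version-140749 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4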
I0120 14:32:43.457532 950903 logs.go:123] Gathering logs for kube-proxy [4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25] ...
I0120 14:32:43.457547 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
I0120 14:32:43.513402 950903 logs.go:123] Gathering logs for kube-controller-manager [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45] ...
I0120 14:32:43.513432 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
I0120 14:32:43.575002 950903 logs.go:123] Gathering logs for kube-controller-manager [f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec] ...
I0120 14:32:43.575049 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
I0120 14:32:43.635251 950903 logs.go:123] Gathering logs for storage-provisioner [46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa] ...
I0120 14:32:43.635291 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
I0120 14:32:43.679772 950903 logs.go:123] Gathering logs for kube-scheduler [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f] ...
I0120 14:32:43.679802 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
I0120 14:32:43.725126 950903 logs.go:123] Gathering logs for kube-proxy [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d] ...
I0120 14:32:43.725160 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
I0120 14:32:43.764221 950903 logs.go:123] Gathering logs for storage-provisioner [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233] ...
I0120 14:32:43.764246 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
I0120 14:32:43.803933 950903 logs.go:123] Gathering logs for kube-apiserver [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c] ...
I0120 14:32:43.803963 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
I0120 14:32:43.865136 950903 logs.go:123] Gathering logs for kube-apiserver [032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63] ...
I0120 14:32:43.865173 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
I0120 14:32:43.927846 950903 logs.go:123] Gathering logs for etcd [4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b] ...
I0120 14:32:43.927885 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
I0120 14:32:43.976062 950903 logs.go:123] Gathering logs for coredns [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b] ...
I0120 14:32:43.976150 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
I0120 14:32:44.017480 950903 logs.go:123] Gathering logs for kube-scheduler [a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071] ...
I0120 14:32:44.017512 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
I0120 14:32:44.074744 950903 logs.go:123] Gathering logs for kindnet [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed] ...
I0120 14:32:44.074778 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
I0120 14:32:44.129782 950903 logs.go:123] Gathering logs for container status ...
I0120 14:32:44.129812 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
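Each "Gathering logs for ..." step pairs a container ID discovered earlier with a tail of its CRI logs, and the final container-status step falls back to docker ps -a if crictl is unavailable. Any single step can be rerun by hand; a sketch, assuming the container ID (taken from the kube-proxy line above) still exists on the node:

# Tail the last 400 lines for one container, exactly as the harness does.
minikube -p old-k8s-version-140749 ssh -- sudo crictl logs --tail 400 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25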
I0120 14:32:44.177518 950903 out.go:358] Setting ErrFile to fd 2...
I0120 14:32:44.177547 950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 14:32:44.177739 950903 out.go:270] X Problems detected in kubelet:
W0120 14:32:44.177760 950903 out.go:270] Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:44.177785 950903 out.go:270] Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:44.177798 950903 out.go:270] Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:44.177805 950903 out.go:270] Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:44.177811 950903 out.go:270] Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
I0120 14:32:44.177818 950903 out.go:358] Setting ErrFile to fd 2...
I0120 14:32:44.177825 950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
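Before moving on, minikube echoes the most recent kubelet problems to stderr as the "X Problems detected in kubelet" block; these are the same journal entries matched by the logs.go:138 scanner. They can be read straight from the source; a sketch, where the grep filter is an illustrative addition and not part of the harness:

# Pull the raw kubelet journal and keep only the back-off entries.
minikube -p old-k8s-version-140749 ssh "sudo journalctl -u kubelet -n 400 | grep -E 'ImagePullBackOff|CrashLoopBackOff'"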
I0120 14:32:43.395313 959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:45.894471 959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:48.397393 959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:50.893995 959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
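The four pod_ready lines above carry a different PID (959078) and a different pod name (metrics-server-f79f97bbb-675vb), so they evidently come from a second test profile running in parallel, interleaved into the shared log. Readiness of a metrics-server pod can be checked directly; a sketch, assuming the profile's kubeconfig context is active and that the addon's pods carry the conventional k8s-app=metrics-server label:

# Show the pod's status, then wait on its Ready condition (in the failing
# cluster this wait would time out, since the image can never be pulled).
kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=60s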
I0120 14:32:54.183032 950903 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:32:54.196296 950903 api_server.go:72] duration metric: took 5m48.957681866s to wait for apiserver process to appear ...
I0120 14:32:54.196319 950903 api_server.go:88] waiting for apiserver healthz status ...
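Having confirmed a kube-apiserver process with pgrep (5m48s after the restart began), the harness switches to polling the apiserver's healthz endpoint. The same probe can be issued by hand; a sketch, assuming minikube's default in-node apiserver port of 8443:

# From inside the node; a healthy apiserver answers "ok".
minikube -p old-k8s-version-140749 ssh -- curl -sk https://localhost:8443/healthz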
I0120 14:32:54.196358 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:32:54.196418 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:32:54.237364 950903 cri.go:89] found id: "7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
I0120 14:32:54.237383 950903 cri.go:89] found id: "032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
I0120 14:32:54.237388 950903 cri.go:89] found id: ""
I0120 14:32:54.237395 950903 logs.go:282] 2 containers: [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63]
I0120 14:32:54.237452 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.241365 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.244944 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:32:54.245021 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:32:54.290562 950903 cri.go:89] found id: "260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
I0120 14:32:54.290585 950903 cri.go:89] found id: "4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
I0120 14:32:54.290590 950903 cri.go:89] found id: ""
I0120 14:32:54.290598 950903 logs.go:282] 2 containers: [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b]
I0120 14:32:54.290659 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.294510 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.298115 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:32:54.298194 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:32:54.343372 950903 cri.go:89] found id: "df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
I0120 14:32:54.343391 950903 cri.go:89] found id: "49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
I0120 14:32:54.343396 950903 cri.go:89] found id: ""
I0120 14:32:54.343403 950903 logs.go:282] 2 containers: [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d]
I0120 14:32:54.343464 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.349876 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.353487 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:32:54.353670 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:32:54.404374 950903 cri.go:89] found id: "901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
I0120 14:32:54.404402 950903 cri.go:89] found id: "a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
I0120 14:32:54.404407 950903 cri.go:89] found id: ""
I0120 14:32:54.404415 950903 logs.go:282] 2 containers: [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071]
I0120 14:32:54.404476 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.408537 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.412682 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:32:54.412783 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:32:54.460122 950903 cri.go:89] found id: "980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
I0120 14:32:54.460145 950903 cri.go:89] found id: "4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
I0120 14:32:54.460150 950903 cri.go:89] found id: ""
I0120 14:32:54.460158 950903 logs.go:282] 2 containers: [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25]
I0120 14:32:54.460215 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.464203 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.468701 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:32:54.468781 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:32:54.517365 950903 cri.go:89] found id: "cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
I0120 14:32:54.517389 950903 cri.go:89] found id: "f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
I0120 14:32:54.517394 950903 cri.go:89] found id: ""
I0120 14:32:54.517401 950903 logs.go:282] 2 containers: [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec]
I0120 14:32:54.517461 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.521673 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.525274 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:32:54.525351 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:32:54.571915 950903 cri.go:89] found id: "15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
I0120 14:32:54.571943 950903 cri.go:89] found id: "4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
I0120 14:32:54.571950 950903 cri.go:89] found id: ""
I0120 14:32:54.571957 950903 logs.go:282] 2 containers: [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f]
I0120 14:32:54.572019 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.576070 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.579794 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:32:54.579879 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:32:54.618519 950903 cri.go:89] found id: "0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
I0120 14:32:54.618588 950903 cri.go:89] found id: "46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
I0120 14:32:54.618600 950903 cri.go:89] found id: ""
I0120 14:32:54.618609 950903 logs.go:282] 2 containers: [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa]
I0120 14:32:54.618677 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.622286 950903 ssh_runner.go:195] Run: which crictl
I0120 14:32:54.625962 950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:32:54.626082 950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:32:54.665109 950903 cri.go:89] found id: "c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
I0120 14:32:54.665134 950903 cri.go:89] found id: ""
I0120 14:32:54.665143 950903 logs.go:282] 1 containers: [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6]
I0120 14:32:54.665201 950903 ssh_runner.go:195] Run: which crictl
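The listing pass above runs crictl ps -a once per component name. Most components report two container IDs, consistent with one exited container from before the restart plus the one currently running ({State:all} includes exited containers); kubernetes-dashboard reports a single ID. One iteration, reproduced by hand:

# List every kube-apiserver container, including exited ones, IDs only.
minikube -p old-k8s-version-140749 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver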
I0120 14:32:54.668912 950903 logs.go:123] Gathering logs for kube-controller-manager [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45] ...
I0120 14:32:54.668936 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
I0120 14:32:54.731588 950903 logs.go:123] Gathering logs for kube-controller-manager [f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec] ...
I0120 14:32:54.731623 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
I0120 14:32:54.798223 950903 logs.go:123] Gathering logs for kindnet [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed] ...
I0120 14:32:54.798262 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
I0120 14:32:54.849667 950903 logs.go:123] Gathering logs for describe nodes ...
I0120 14:32:54.849699 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
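The "describe nodes" step uses the kubectl binary and kubeconfig that minikube stages on the node itself, pinned to the cluster's Kubernetes version (v1.20.0 here) rather than whatever kubectl the host happens to have. Reproduced by hand:

# Same command the harness runs, wrapped in minikube ssh.
minikube -p old-k8s-version-140749 ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig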
I0120 14:32:55.017611 950903 logs.go:123] Gathering logs for kube-apiserver [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c] ...
I0120 14:32:55.017703 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
I0120 14:32:55.079897 950903 logs.go:123] Gathering logs for kube-scheduler [a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071] ...
I0120 14:32:55.079935 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
I0120 14:32:55.127145 950903 logs.go:123] Gathering logs for kube-proxy [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d] ...
I0120 14:32:55.127184 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
I0120 14:32:55.179168 950903 logs.go:123] Gathering logs for kubelet ...
I0120 14:32:55.179197 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0120 14:32:55.231529 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.690899 663 reflector.go:138] object-"kube-system"/"coredns-token-f95sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f95sh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.231791 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691117 663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.232001 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691376 663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.232213 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691453 663 reflector.go:138] object-"default"/"default-token-8wp7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8wp7x" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.232424 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691503 663 reflector.go:138] object-"kube-system"/"kindnet-token-xx7dh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xx7dh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.232643 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691562 663 reflector.go:138] object-"kube-system"/"kube-proxy-token-s6tbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s6tbt" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.232867 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691635 663 reflector.go:138] object-"kube-system"/"metrics-server-token-dgscp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dgscp" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.233121 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.692028 663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-mlrbf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-mlrbf" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.242036 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:27 old-k8s-version-140749 kubelet[663]: E0120 14:27:27.904251 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:55.242233 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:28 old-k8s-version-140749 kubelet[663]: E0120 14:27:28.466147 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.245063 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:41 old-k8s-version-140749 kubelet[663]: E0120 14:27:41.953783 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:55.246929 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:43 old-k8s-version-140749 kubelet[663]: E0120 14:27:43.761273 663 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-xh79t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-xh79t" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-140749' and this object
W0120 14:32:55.247464 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:55 old-k8s-version-140749 kubelet[663]: E0120 14:27:55.965627 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.248064 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:57 old-k8s-version-140749 kubelet[663]: E0120 14:27:57.605396 663 pod_workers.go:191] Error syncing pod e9c231b5-a5c1-498d-aa26-caf987208dc2 ("storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"
W0120 14:32:55.248529 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:58 old-k8s-version-140749 kubelet[663]: E0120 14:27:58.615334 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.248857 950903 logs.go:138] Found kubelet problem: Jan 20 14:27:59 old-k8s-version-140749 kubelet[663]: E0120 14:27:59.633074 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.249550 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:05 old-k8s-version-140749 kubelet[663]: E0120 14:28:05.586265 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.252091 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:06 old-k8s-version-140749 kubelet[663]: E0120 14:28:06.962070 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:55.252545 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:17 old-k8s-version-140749 kubelet[663]: E0120 14:28:17.944316 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.253009 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:18 old-k8s-version-140749 kubelet[663]: E0120 14:28:18.685866 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.253399 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:25 old-k8s-version-140749 kubelet[663]: E0120 14:28:25.586275 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.253597 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:32 old-k8s-version-140749 kubelet[663]: E0120 14:28:32.945271 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.253925 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:37 old-k8s-version-140749 kubelet[663]: E0120 14:28:37.943400 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.254111 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:46 old-k8s-version-140749 kubelet[663]: E0120 14:28:46.943760 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.254695 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:51 old-k8s-version-140749 kubelet[663]: E0120 14:28:51.768837 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.255022 950903 logs.go:138] Found kubelet problem: Jan 20 14:28:55 old-k8s-version-140749 kubelet[663]: E0120 14:28:55.585724 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.258008 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:00 old-k8s-version-140749 kubelet[663]: E0120 14:29:00.952397 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:55.258363 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:10 old-k8s-version-140749 kubelet[663]: E0120 14:29:10.942909 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.258551 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:11 old-k8s-version-140749 kubelet[663]: E0120 14:29:11.944209 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.258884 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:22 old-k8s-version-140749 kubelet[663]: E0120 14:29:22.951255 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.259078 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:23 old-k8s-version-140749 kubelet[663]: E0120 14:29:23.956425 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.259667 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:35 old-k8s-version-140749 kubelet[663]: E0120 14:29:35.903419 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.259852 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:36 old-k8s-version-140749 kubelet[663]: E0120 14:29:36.945915 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.260180 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:45 old-k8s-version-140749 kubelet[663]: E0120 14:29:45.585844 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.260364 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:51 old-k8s-version-140749 kubelet[663]: E0120 14:29:51.943550 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.260690 950903 logs.go:138] Found kubelet problem: Jan 20 14:29:59 old-k8s-version-140749 kubelet[663]: E0120 14:29:59.943021 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.260876 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:05 old-k8s-version-140749 kubelet[663]: E0120 14:30:05.943119 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.261204 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:13 old-k8s-version-140749 kubelet[663]: E0120 14:30:13.942813 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.261391 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:17 old-k8s-version-140749 kubelet[663]: E0120 14:30:17.943166 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.261725 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.943282 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.264343 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.959102 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 14:32:55.264682 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:40 old-k8s-version-140749 kubelet[663]: E0120 14:30:40.946333 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.264869 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:43 old-k8s-version-140749 kubelet[663]: E0120 14:30:43.946388 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.265195 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:52 old-k8s-version-140749 kubelet[663]: E0120 14:30:52.943384 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.265378 950903 logs.go:138] Found kubelet problem: Jan 20 14:30:57 old-k8s-version-140749 kubelet[663]: E0120 14:30:57.943462 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.265970 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:04 old-k8s-version-140749 kubelet[663]: E0120 14:31:04.184881 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.266300 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:05 old-k8s-version-140749 kubelet[663]: E0120 14:31:05.586278 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.266484 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:11 old-k8s-version-140749 kubelet[663]: E0120 14:31:11.943489 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.266811 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:19 old-k8s-version-140749 kubelet[663]: E0120 14:31:19.942873 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.266995 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:22 old-k8s-version-140749 kubelet[663]: E0120 14:31:22.943814 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.267180 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:33 old-k8s-version-140749 kubelet[663]: E0120 14:31:33.943249 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.267508 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: E0120 14:31:34.945214 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.267693 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:46 old-k8s-version-140749 kubelet[663]: E0120 14:31:46.943127 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.268018 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: E0120 14:31:48.943309 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.268202 950903 logs.go:138] Found kubelet problem: Jan 20 14:31:59 old-k8s-version-140749 kubelet[663]: E0120 14:31:59.943359 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.268526 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: E0120 14:32:00.943110 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.268851 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.942810 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.269034 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.269217 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.269551 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.269743 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.270064 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.270393 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.942791 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.270576 950903 logs.go:138] Found kubelet problem: Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.943917 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 14:32:55.270586 950903 logs.go:123] Gathering logs for etcd [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b] ...
I0120 14:32:55.270600 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
I0120 14:32:55.318446 950903 logs.go:123] Gathering logs for coredns [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b] ...
I0120 14:32:55.318482 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
I0120 14:32:55.374342 950903 logs.go:123] Gathering logs for dmesg ...
I0120 14:32:55.374372 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:32:55.397751 950903 logs.go:123] Gathering logs for etcd [4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b] ...
I0120 14:32:55.397781 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
I0120 14:32:55.441396 950903 logs.go:123] Gathering logs for kube-proxy [4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25] ...
I0120 14:32:55.441427 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
I0120 14:32:55.485012 950903 logs.go:123] Gathering logs for kindnet [4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f] ...
I0120 14:32:55.485049 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
I0120 14:32:55.538388 950903 logs.go:123] Gathering logs for storage-provisioner [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233] ...
I0120 14:32:55.538415 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
I0120 14:32:55.603551 950903 logs.go:123] Gathering logs for storage-provisioner [46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa] ...
I0120 14:32:55.603583 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
I0120 14:32:55.653716 950903 logs.go:123] Gathering logs for kubernetes-dashboard [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6] ...
I0120 14:32:55.653743 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
I0120 14:32:55.705317 950903 logs.go:123] Gathering logs for kube-apiserver [032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63] ...
I0120 14:32:55.705344 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
I0120 14:32:55.761106 950903 logs.go:123] Gathering logs for coredns [49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d] ...
I0120 14:32:55.761142 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
I0120 14:32:55.800636 950903 logs.go:123] Gathering logs for kube-scheduler [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f] ...
I0120 14:32:55.800666 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
I0120 14:32:55.845669 950903 logs.go:123] Gathering logs for containerd ...
I0120 14:32:55.845701 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:32:55.917760 950903 logs.go:123] Gathering logs for container status ...
I0120 14:32:55.917799 950903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 14:32:55.994852 950903 out.go:358] Setting ErrFile to fd 2...
I0120 14:32:55.994879 950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 14:32:55.994927 950903 out.go:270] X Problems detected in kubelet:
W0120 14:32:55.994945 950903 out.go:270] Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.994954 950903 out.go:270] Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 14:32:55.994966 950903 out.go:270] Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.994973 950903 out.go:270] Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.942791 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
W0120 14:32:55.994985 950903 out.go:270] Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.943917 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 14:32:55.994992 950903 out.go:358] Setting ErrFile to fd 2...
I0120 14:32:55.995007 950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:32:53.394116 959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:55.394996 959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
I0120 14:32:55.897007 959078 pod_ready.go:82] duration metric: took 4m0.009316185s for pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace to be "Ready" ...
E0120 14:32:55.897032 959078 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0120 14:32:55.897043 959078 pod_ready.go:39] duration metric: took 4m0.630600399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 14:32:55.897057 959078 api_server.go:52] waiting for apiserver process to appear ...
I0120 14:32:55.897084 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:32:55.897143 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:32:55.950262 959078 cri.go:89] found id: "6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0"
I0120 14:32:55.950284 959078 cri.go:89] found id: "3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d"
I0120 14:32:55.950290 959078 cri.go:89] found id: ""
I0120 14:32:55.950297 959078 logs.go:282] 2 containers: [6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0 3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d]
I0120 14:32:55.950357 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:55.955259 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:55.966194 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:32:55.966277 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:32:56.024403 959078 cri.go:89] found id: "dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1"
I0120 14:32:56.024423 959078 cri.go:89] found id: "8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa"
I0120 14:32:56.024428 959078 cri.go:89] found id: ""
I0120 14:32:56.024436 959078 logs.go:282] 2 containers: [dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1 8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa]
I0120 14:32:56.024500 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.029413 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.034504 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:32:56.034584 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:32:56.076966 959078 cri.go:89] found id: "629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e"
I0120 14:32:56.076993 959078 cri.go:89] found id: "d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386"
I0120 14:32:56.076998 959078 cri.go:89] found id: ""
I0120 14:32:56.077006 959078 logs.go:282] 2 containers: [629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386]
I0120 14:32:56.077080 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.081115 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.086545 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:32:56.086669 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:32:56.126757 959078 cri.go:89] found id: "12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90"
I0120 14:32:56.126782 959078 cri.go:89] found id: "3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f"
I0120 14:32:56.126788 959078 cri.go:89] found id: ""
I0120 14:32:56.126796 959078 logs.go:282] 2 containers: [12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90 3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f]
I0120 14:32:56.126859 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.130545 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.134075 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:32:56.134178 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:32:56.181428 959078 cri.go:89] found id: "93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830"
I0120 14:32:56.181452 959078 cri.go:89] found id: "057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1"
I0120 14:32:56.181456 959078 cri.go:89] found id: ""
I0120 14:32:56.181463 959078 logs.go:282] 2 containers: [93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830 057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1]
I0120 14:32:56.181554 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.185580 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.189242 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:32:56.189321 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:32:56.242280 959078 cri.go:89] found id: "e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb"
I0120 14:32:56.242301 959078 cri.go:89] found id: "949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5"
I0120 14:32:56.242306 959078 cri.go:89] found id: ""
I0120 14:32:56.242314 959078 logs.go:282] 2 containers: [e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb 949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5]
I0120 14:32:56.242371 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.246205 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.250037 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:32:56.250117 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:32:56.291743 959078 cri.go:89] found id: "5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b"
I0120 14:32:56.291766 959078 cri.go:89] found id: "5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315"
I0120 14:32:56.291772 959078 cri.go:89] found id: ""
I0120 14:32:56.291779 959078 logs.go:282] 2 containers: [5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b 5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315]
I0120 14:32:56.291838 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.295404 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.302789 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:32:56.302876 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:32:56.358332 959078 cri.go:89] found id: "a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6"
I0120 14:32:56.358355 959078 cri.go:89] found id: ""
I0120 14:32:56.358364 959078 logs.go:282] 1 containers: [a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6]
I0120 14:32:56.358419 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.362394 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:32:56.362475 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:32:56.417727 959078 cri.go:89] found id: "f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8"
I0120 14:32:56.417749 959078 cri.go:89] found id: "b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb"
I0120 14:32:56.417754 959078 cri.go:89] found id: ""
I0120 14:32:56.417761 959078 logs.go:282] 2 containers: [f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8 b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb]
I0120 14:32:56.417817 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.421278 959078 ssh_runner.go:195] Run: which crictl
I0120 14:32:56.425257 959078 logs.go:123] Gathering logs for kubernetes-dashboard [a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6] ...
I0120 14:32:56.425287 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6"
I0120 14:32:56.477196 959078 logs.go:123] Gathering logs for describe nodes ...
I0120 14:32:56.477225 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:32:56.629894 959078 logs.go:123] Gathering logs for kube-apiserver [3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d] ...
I0120 14:32:56.629964 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d"
I0120 14:32:56.685123 959078 logs.go:123] Gathering logs for etcd [8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa] ...
I0120 14:32:56.685400 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa"
I0120 14:32:56.745201 959078 logs.go:123] Gathering logs for coredns [629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e] ...
I0120 14:32:56.745278 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e"
I0120 14:32:56.792649 959078 logs.go:123] Gathering logs for kube-controller-manager [e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb] ...
I0120 14:32:56.792682 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb"
I0120 14:32:56.859606 959078 logs.go:123] Gathering logs for kube-controller-manager [949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5] ...
I0120 14:32:56.859646 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5"
I0120 14:32:56.922708 959078 logs.go:123] Gathering logs for container status ...
I0120 14:32:56.922750 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 14:32:56.982342 959078 logs.go:123] Gathering logs for kindnet [5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b] ...
I0120 14:32:56.982373 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b"
I0120 14:32:57.033231 959078 logs.go:123] Gathering logs for kubelet ...
I0120 14:32:57.033261 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 14:32:57.122895 959078 logs.go:123] Gathering logs for dmesg ...
I0120 14:32:57.122936 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:32:57.139634 959078 logs.go:123] Gathering logs for kube-apiserver [6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0] ...
I0120 14:32:57.139686 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0"
I0120 14:32:57.194713 959078 logs.go:123] Gathering logs for kube-scheduler [12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90] ...
I0120 14:32:57.194847 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90"
I0120 14:32:57.234732 959078 logs.go:123] Gathering logs for kube-scheduler [3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f] ...
I0120 14:32:57.234760 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f"
I0120 14:32:57.287201 959078 logs.go:123] Gathering logs for kube-proxy [93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830] ...
I0120 14:32:57.287241 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830"
I0120 14:32:57.347927 959078 logs.go:123] Gathering logs for kube-proxy [057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1] ...
I0120 14:32:57.347961 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1"
I0120 14:32:57.399223 959078 logs.go:123] Gathering logs for storage-provisioner [f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8] ...
I0120 14:32:57.399255 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8"
I0120 14:32:57.440813 959078 logs.go:123] Gathering logs for containerd ...
I0120 14:32:57.440895 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:32:57.507987 959078 logs.go:123] Gathering logs for etcd [dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1] ...
I0120 14:32:57.508026 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1"
I0120 14:32:57.557577 959078 logs.go:123] Gathering logs for coredns [d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386] ...
I0120 14:32:57.557681 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386"
I0120 14:32:57.601620 959078 logs.go:123] Gathering logs for kindnet [5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315] ...
I0120 14:32:57.601649 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315"
I0120 14:32:57.645710 959078 logs.go:123] Gathering logs for storage-provisioner [b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb] ...
I0120 14:32:57.645736 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb"
I0120 14:33:00.192118 959078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:33:00.212066 959078 api_server.go:72] duration metric: took 4m10.744671563s to wait for apiserver process to appear ...
I0120 14:33:00.212152 959078 api_server.go:88] waiting for apiserver healthz status ...
I0120 14:33:00.212230 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:33:00.212349 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:33:00.272543 959078 cri.go:89] found id: "6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0"
I0120 14:33:00.272572 959078 cri.go:89] found id: "3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d"
I0120 14:33:00.272580 959078 cri.go:89] found id: ""
I0120 14:33:00.272588 959078 logs.go:282] 2 containers: [6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0 3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d]
I0120 14:33:00.272683 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.282927 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.290020 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:33:00.290143 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:33:00.357040 959078 cri.go:89] found id: "dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1"
I0120 14:33:00.357067 959078 cri.go:89] found id: "8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa"
I0120 14:33:00.357073 959078 cri.go:89] found id: ""
I0120 14:33:00.357080 959078 logs.go:282] 2 containers: [dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1 8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa]
I0120 14:33:00.357147 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.362205 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.366997 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:33:00.367100 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:33:00.412221 959078 cri.go:89] found id: "629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e"
I0120 14:33:00.412288 959078 cri.go:89] found id: "d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386"
I0120 14:33:00.412310 959078 cri.go:89] found id: ""
I0120 14:33:00.412324 959078 logs.go:282] 2 containers: [629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386]
I0120 14:33:00.412402 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.416260 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.419715 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:33:00.419799 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:33:00.459298 959078 cri.go:89] found id: "12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90"
I0120 14:33:00.459321 959078 cri.go:89] found id: "3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f"
I0120 14:33:00.459327 959078 cri.go:89] found id: ""
I0120 14:33:00.459334 959078 logs.go:282] 2 containers: [12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90 3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f]
I0120 14:33:00.459397 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.463492 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.467676 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:33:00.467806 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:33:00.514141 959078 cri.go:89] found id: "93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830"
I0120 14:33:00.514209 959078 cri.go:89] found id: "057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1"
I0120 14:33:00.514220 959078 cri.go:89] found id: ""
I0120 14:33:00.514229 959078 logs.go:282] 2 containers: [93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830 057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1]
I0120 14:33:00.514299 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.518838 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.522280 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:33:00.522421 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:33:00.562126 959078 cri.go:89] found id: "e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb"
I0120 14:33:00.562150 959078 cri.go:89] found id: "949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5"
I0120 14:33:00.562162 959078 cri.go:89] found id: ""
I0120 14:33:00.562171 959078 logs.go:282] 2 containers: [e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb 949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5]
I0120 14:33:00.562228 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.565971 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.569419 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:33:00.569504 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:33:00.610762 959078 cri.go:89] found id: "5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b"
I0120 14:33:00.610837 959078 cri.go:89] found id: "5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315"
I0120 14:33:00.610857 959078 cri.go:89] found id: ""
I0120 14:33:00.610873 959078 logs.go:282] 2 containers: [5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b 5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315]
I0120 14:33:00.610950 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.614551 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.618361 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:33:00.618458 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:33:00.658033 959078 cri.go:89] found id: "f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8"
I0120 14:33:00.658104 959078 cri.go:89] found id: "b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb"
I0120 14:33:00.658117 959078 cri.go:89] found id: ""
I0120 14:33:00.658126 959078 logs.go:282] 2 containers: [f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8 b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb]
I0120 14:33:00.658189 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.662156 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.665641 959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:33:00.665722 959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:33:00.710602 959078 cri.go:89] found id: "a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6"
I0120 14:33:00.710676 959078 cri.go:89] found id: ""
I0120 14:33:00.710691 959078 logs.go:282] 1 containers: [a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6]
I0120 14:33:00.710757 959078 ssh_runner.go:195] Run: which crictl
I0120 14:33:00.714662 959078 logs.go:123] Gathering logs for coredns [d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386] ...
I0120 14:33:00.714697 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386"
I0120 14:33:00.753753 959078 logs.go:123] Gathering logs for kube-controller-manager [e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb] ...
I0120 14:33:00.753836 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb"
I0120 14:33:00.815366 959078 logs.go:123] Gathering logs for kube-controller-manager [949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5] ...
I0120 14:33:00.815404 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5"
I0120 14:33:00.876962 959078 logs.go:123] Gathering logs for kindnet [5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b] ...
I0120 14:33:00.876996 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b"
I0120 14:33:00.926741 959078 logs.go:123] Gathering logs for dmesg ...
I0120 14:33:00.926773 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:33:00.950697 959078 logs.go:123] Gathering logs for kube-apiserver [3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d] ...
I0120 14:33:00.950726 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d"
I0120 14:33:01.022434 959078 logs.go:123] Gathering logs for etcd [dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1] ...
I0120 14:33:01.022469 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1"
I0120 14:33:01.066738 959078 logs.go:123] Gathering logs for describe nodes ...
I0120 14:33:01.066774 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:33:01.200945 959078 logs.go:123] Gathering logs for kindnet [5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315] ...
I0120 14:33:01.200982 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315"
I0120 14:33:01.245299 959078 logs.go:123] Gathering logs for storage-provisioner [f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8] ...
I0120 14:33:01.245330 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8"
I0120 14:33:01.312614 959078 logs.go:123] Gathering logs for containerd ...
I0120 14:33:01.312642 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:33:01.387399 959078 logs.go:123] Gathering logs for container status ...
I0120 14:33:01.387465 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 14:33:01.437969 959078 logs.go:123] Gathering logs for kubelet ...
I0120 14:33:01.438000 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 14:33:01.524000 959078 logs.go:123] Gathering logs for kube-apiserver [6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0] ...
I0120 14:33:01.524037 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0"
I0120 14:33:01.578285 959078 logs.go:123] Gathering logs for etcd [8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa] ...
I0120 14:33:01.578321 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa"
I0120 14:33:01.623669 959078 logs.go:123] Gathering logs for kube-proxy [93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830] ...
I0120 14:33:01.623704 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830"
I0120 14:33:01.669218 959078 logs.go:123] Gathering logs for kube-proxy [057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1] ...
I0120 14:33:01.669250 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1"
I0120 14:33:01.721751 959078 logs.go:123] Gathering logs for storage-provisioner [b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb] ...
I0120 14:33:01.721780 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb"
I0120 14:33:01.764526 959078 logs.go:123] Gathering logs for kubernetes-dashboard [a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6] ...
I0120 14:33:01.764555 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6"
I0120 14:33:01.811149 959078 logs.go:123] Gathering logs for coredns [629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e] ...
I0120 14:33:01.811179 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e"
I0120 14:33:01.857926 959078 logs.go:123] Gathering logs for kube-scheduler [12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90] ...
I0120 14:33:01.858015 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90"
I0120 14:33:01.901271 959078 logs.go:123] Gathering logs for kube-scheduler [3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f] ...
I0120 14:33:01.901300 959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f"
I0120 14:33:05.995189 950903 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0120 14:33:06.005351 950903 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0120 14:33:06.009443 950903 out.go:201]
W0120 14:33:06.013033 950903 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0120 14:33:06.013087 950903 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0120 14:33:06.013119 950903 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0120 14:33:06.013130 950903 out.go:270] *
W0120 14:33:06.014124 950903 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0120 14:33:06.017802 950903 out.go:201]
==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                        ATTEMPT   POD ID          POD
7695074e176e0   523cad1a4df73   2 minutes ago   Exited    dashboard-metrics-scraper   5         34d743f08be0e   dashboard-metrics-scraper-8d5bb5db8-glscn
0731b37e3a8d5   ba04bb24b9575   4 minutes ago   Running   storage-provisioner         2         3c15f60dbf5b9   storage-provisioner
c1745625d0923   20b332c9a70d8   5 minutes ago   Running   kubernetes-dashboard        0         34ec62419cbc9   kubernetes-dashboard-cd95d586-rckbc
15e6eca40378b   2be0bcf609c65   5 minutes ago   Running   kindnet-cni                 1         5cea1cf13e7b6   kindnet-7z8qd
46dbef2bf421e   ba04bb24b9575   5 minutes ago   Exited    storage-provisioner         1         3c15f60dbf5b9   storage-provisioner
df227ea0cd40a   db91994f4ee8f   5 minutes ago   Running   coredns                     1         33e86d1ef1601   coredns-74ff55c5b-qsqbp
980a433503981   25a5233254979   5 minutes ago   Running   kube-proxy                  1         d007e9d87e1d4   kube-proxy-wrpl6
d36017b735848   1611cd07b61d5   5 minutes ago   Running   busybox                     1         808c2b428e0fb   busybox
cf07d13821464   1df8a2b116bd1   5 minutes ago   Running   kube-controller-manager     1         ebf23fa5e76a0   kube-controller-manager-old-k8s-version-140749
7cbffdc94e647   2c08bbbc02d3a   5 minutes ago   Running   kube-apiserver              1         d8c067ab50b6f   kube-apiserver-old-k8s-version-140749
901324074aae3   e7605f88f17d6   5 minutes ago   Running   kube-scheduler              1         ee07298b7a09c   kube-scheduler-old-k8s-version-140749
260a4c4121f58   05b738aa1bc63   5 minutes ago   Running   etcd                        1         6b43306dbf679   etcd-old-k8s-version-140749
296c4154063f4   1611cd07b61d5   6 minutes ago   Exited    busybox                     0         80c26978fc2bd   busybox
49305c6d7d9da   db91994f4ee8f   7 minutes ago   Exited    coredns                     0         27cf8a620ac5f   coredns-74ff55c5b-qsqbp
4b0e77b57208a   2be0bcf609c65   7 minutes ago   Exited    kindnet-cni                 0         262232e33b738   kindnet-7z8qd
4161d34b27869   25a5233254979   7 minutes ago   Exited    kube-proxy                  0         a7655b564d91f   kube-proxy-wrpl6
4dc67e60f527c   05b738aa1bc63   8 minutes ago   Exited    etcd                        0         45122d8ef8f9e   etcd-old-k8s-version-140749
a38942066106c   e7605f88f17d6   8 minutes ago   Exited    kube-scheduler              0         7e2cb02745e89   kube-scheduler-old-k8s-version-140749
f1fd5c8cbb787   1df8a2b116bd1   8 minutes ago   Exited    kube-controller-manager     0         f667b2f8b920f   kube-controller-manager-old-k8s-version-140749
032a69713fb6a   2c08bbbc02d3a   8 minutes ago   Exited    kube-apiserver              0         998b8736a7e8b   kube-apiserver-old-k8s-version-140749
==> containerd <==
Jan 20 14:29:00 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:00.951886437Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 20 14:29:34 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:34.950990972Z" level=info msg="CreateContainer within sandbox \"34d743f08be0e3144c74273abd1d5c121fb5a75561a4bb7e6c59af7060fc1109\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Jan 20 14:29:34 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:34.974698206Z" level=info msg="CreateContainer within sandbox \"34d743f08be0e3144c74273abd1d5c121fb5a75561a4bb7e6c59af7060fc1109\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\""
Jan 20 14:29:34 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:34.975748295Z" level=info msg="StartContainer for \"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\""
Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.057059200Z" level=info msg="StartContainer for \"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\" returns successfully"
Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.057332202Z" level=info msg="received exit event container_id:\"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\" id:\"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\" pid:3101 exit_status:255 exited_at:{seconds:1737383375 nanos:56479136}"
Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.084398301Z" level=info msg="shim disconnected" id=810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c namespace=k8s.io
Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.084465633Z" level=warning msg="cleaning up after shim disconnected" id=810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c namespace=k8s.io
Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.084518310Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.905431906Z" level=info msg="RemoveContainer for \"379fb20749554b9c5354559de926426f3a11f312adb0394e659c317412686a8a\""
Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.919071004Z" level=info msg="RemoveContainer for \"379fb20749554b9c5354559de926426f3a11f312adb0394e659c317412686a8a\" returns successfully"
Jan 20 14:30:28 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:30:28.949694920Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:30:28 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:30:28.955017979Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Jan 20 14:30:28 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:30:28.958062365Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jan 20 14:30:28 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:30:28.958173282Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 20 14:31:03 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:03.944750242Z" level=info msg="CreateContainer within sandbox \"34d743f08be0e3144c74273abd1d5c121fb5a75561a4bb7e6c59af7060fc1109\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Jan 20 14:31:03 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:03.965704357Z" level=info msg="CreateContainer within sandbox \"34d743f08be0e3144c74273abd1d5c121fb5a75561a4bb7e6c59af7060fc1109\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370\""
Jan 20 14:31:03 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:03.966502161Z" level=info msg="StartContainer for \"7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370\""
Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.037924212Z" level=info msg="StartContainer for \"7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370\" returns successfully"
Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.037994104Z" level=info msg="received exit event container_id:\"7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370\" id:\"7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370\" pid:3331 exit_status:255 exited_at:{seconds:1737383464 nanos:37380760}"
Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.064498371Z" level=info msg="shim disconnected" id=7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370 namespace=k8s.io
Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.064562634Z" level=warning msg="cleaning up after shim disconnected" id=7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370 namespace=k8s.io
Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.064575180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.187184408Z" level=info msg="RemoveContainer for \"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\""
Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.200993893Z" level=info msg="RemoveContainer for \"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\" returns successfully"
==> coredns [49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:50938 - 37407 "HINFO IN 9169729266110919042.8718230186487203758. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011532726s
==> coredns [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:45027 - 43048 "HINFO IN 1347004244483538843.143973130411976210. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012318517s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0120 14:27:57.287915 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 14:27:27.287417121 +0000 UTC m=+0.032230241) (total time: 30.000395308s):
Trace[2019727887]: [30.000395308s] [30.000395308s] END
E0120 14:27:57.287951 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0120 14:27:57.288045 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 14:27:27.287817795 +0000 UTC m=+0.032630915) (total time: 30.000216272s):
Trace[939984059]: [30.000216272s] [30.000216272s] END
E0120 14:27:57.288050 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0120 14:27:57.288390 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 14:27:27.288034329 +0000 UTC m=+0.032847441) (total time: 30.000340252s):
Trace[911902081]: [30.000340252s] [30.000340252s] END
E0120 14:27:57.288403 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
==> describe nodes <==
Name:               old-k8s-version-140749
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=old-k8s-version-140749
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3
                    minikube.k8s.io/name=old-k8s-version-140749
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2025_01_20T14_24_51_0700
                    minikube.k8s.io/version=v1.35.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 20 Jan 2025 14:24:47 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  old-k8s-version-140749
  AcquireTime:     <unset>
  RenewTime:       Mon, 20 Jan 2025 14:33:06 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 20 Jan 2025 14:28:14 +0000   Mon, 20 Jan 2025 14:24:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 20 Jan 2025 14:28:14 +0000   Mon, 20 Jan 2025 14:24:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 20 Jan 2025 14:28:14 +0000   Mon, 20 Jan 2025 14:24:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 20 Jan 2025 14:28:14 +0000   Mon, 20 Jan 2025 14:25:07 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.85.2
  Hostname:    old-k8s-version-140749
Capacity:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022292Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022292Ki
  pods:               110
System Info:
  Machine ID:                 4382c30ae98a43cd9832cdf594ab0620
  System UUID:                9ce90266-c33e-4cc5-b4a8-d30bd2d0e32d
  Boot ID:                    1cf72276-e5cc-4a75-95c3-e1897ed2b9a5
  Kernel Version:             5.15.0-1075-aws
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.7.24
  Kubelet Version:            v1.20.0
  Kube-Proxy Version:         v1.20.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (12 in total)
  Namespace             Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------             ----                                             ------------  ----------  ---------------  -------------  ---
  default               busybox                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
  kube-system           coredns-74ff55c5b-qsqbp                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m
  kube-system           etcd-old-k8s-version-140749                      100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m8s
  kube-system           kindnet-7z8qd                                    100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m
  kube-system           kube-apiserver-old-k8s-version-140749            250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m8s
  kube-system           kube-controller-manager-old-k8s-version-140749   200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m8s
  kube-system           kube-proxy-wrpl6                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
  kube-system           kube-scheduler-old-k8s-version-140749            100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m8s
  kube-system           metrics-server-9975d5f86-lfq2q                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m22s
  kube-system           storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
  kubernetes-dashboard  dashboard-metrics-scraper-8d5bb5db8-glscn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
  kubernetes-dashboard  kubernetes-dashboard-cd95d586-rckbc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (47%)  100m (5%)
  memory             420Mi (5%)  220Mi (2%)
  ephemeral-storage  100Mi (0%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  NodeHasSufficientMemory  8m28s (x5 over 8m29s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m28s (x5 over 8m29s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m28s (x4 over 8m29s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientPID
  Normal  Starting                 8m8s                   kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  8m8s                   kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m8s                   kubelet     Node old-k8s-version-140749 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m8s                   kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m8s                   kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                8m                     kubelet     Node old-k8s-version-140749 status is now: NodeReady
  Normal  Starting                 7m59s                  kube-proxy  Starting kube-proxy.
  Normal  Starting                 5m55s                  kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  5m54s (x8 over 5m55s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m54s (x8 over 5m55s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m54s (x7 over 5m55s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientPID
  Normal  Starting                 5m40s                  kube-proxy  Starting kube-proxy.
==> dmesg <==
[Jan20 14:12] systemd-journald[216]: Failed to send WATCHDOG=1 notification message: Connection refused
==> etcd [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b] <==
2025-01-20 14:28:59.221447 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:29:09.221468 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:29:19.221333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:29:29.221185 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:29:39.221227 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:29:49.221425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:29:59.221377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:30:09.221279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:30:19.221409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:30:29.221197 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:30:39.221353 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:30:49.221245 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:30:59.221549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:31:09.221380 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:31:19.222302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:31:29.221198 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:31:39.221240 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:31:49.221357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:31:59.221214 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:32:09.221388 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:32:19.221341 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:32:29.221184 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:32:39.221504 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:32:49.221192 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:32:59.221318 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b] <==
2025-01-20 14:24:39.930436 I | embed: listening for metrics on http://127.0.0.1:2381
raft2025/01/20 14:24:40 INFO: 9f0758e1c58a86ed is starting a new election at term 1
raft2025/01/20 14:24:40 INFO: 9f0758e1c58a86ed became candidate at term 2
raft2025/01/20 14:24:40 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
raft2025/01/20 14:24:40 INFO: 9f0758e1c58a86ed became leader at term 2
raft2025/01/20 14:24:40 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
2025-01-20 14:24:40.704849 I | etcdserver: published {Name:old-k8s-version-140749 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
2025-01-20 14:24:40.705040 I | embed: ready to serve client requests
2025-01-20 14:24:40.706590 I | embed: serving client requests on 192.168.85.2:2379
2025-01-20 14:24:40.745632 I | etcdserver: setting up the initial cluster version to 3.4
2025-01-20 14:24:40.746761 N | etcdserver/membership: set the initial cluster version to 3.4
2025-01-20 14:24:40.746961 I | embed: ready to serve client requests
2025-01-20 14:24:40.760897 I | embed: serving client requests on 127.0.0.1:2379
2025-01-20 14:24:40.830673 I | etcdserver/api: enabled capabilities for version 3.4
2025-01-20 14:25:04.292803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:25:13.415355 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:25:23.415224 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:25:33.415203 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:25:43.415229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:25:53.415193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:26:03.415251 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:26:13.415363 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:26:23.415161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:26:33.415215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 14:26:43.415410 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
14:33:07 up 4:15, 0 users, load average: 0.97, 2.15, 2.80
Linux old-k8s-version-140749 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed] <==
I0120 14:30:58.622867 1 main.go:301] handling current node
I0120 14:31:08.624120 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:31:08.624355 1 main.go:301] handling current node
I0120 14:31:18.630874 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:31:18.630918 1 main.go:301] handling current node
I0120 14:31:28.623214 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:31:28.623249 1 main.go:301] handling current node
I0120 14:31:38.628221 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:31:38.628261 1 main.go:301] handling current node
I0120 14:31:48.631464 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:31:48.631559 1 main.go:301] handling current node
I0120 14:31:58.631421 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:31:58.631457 1 main.go:301] handling current node
I0120 14:32:08.628299 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:32:08.628336 1 main.go:301] handling current node
I0120 14:32:18.632225 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:32:18.632263 1 main.go:301] handling current node
I0120 14:32:28.623452 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:32:28.623488 1 main.go:301] handling current node
I0120 14:32:38.630150 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:32:38.630187 1 main.go:301] handling current node
I0120 14:32:48.631068 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:32:48.631170 1 main.go:301] handling current node
I0120 14:32:58.628006 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:32:58.628109 1 main.go:301] handling current node
==> kindnet [4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f] <==
I0120 14:25:11.129475 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
I0120 14:25:11.523616 1 controller.go:361] Starting controller kube-network-policies
I0120 14:25:11.523979 1 controller.go:365] Waiting for informer caches to sync
I0120 14:25:11.524079 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I0120 14:25:11.724462 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0120 14:25:11.724491 1 metrics.go:61] Registering metrics
I0120 14:25:11.724715 1 controller.go:401] Syncing nftables rules
I0120 14:25:21.530835 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:25:21.530897 1 main.go:301] handling current node
I0120 14:25:31.523111 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:25:31.523151 1 main.go:301] handling current node
I0120 14:25:41.532113 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:25:41.532161 1 main.go:301] handling current node
I0120 14:25:51.528038 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:25:51.528072 1 main.go:301] handling current node
I0120 14:26:01.523660 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:26:01.523699 1 main.go:301] handling current node
I0120 14:26:11.522899 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:26:11.522936 1 main.go:301] handling current node
I0120 14:26:21.522574 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:26:21.522613 1 main.go:301] handling current node
I0120 14:26:31.530416 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:26:31.530451 1 main.go:301] handling current node
I0120 14:26:41.523257 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 14:26:41.523347 1 main.go:301] handling current node
==> kube-apiserver [032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63] <==
I0120 14:24:48.407504 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0120 14:24:48.407825 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0120 14:24:48.433081 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0120 14:24:48.437427 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0120 14:24:48.437457 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0120 14:24:48.994160 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0120 14:24:49.051743 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0120 14:24:49.183703 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
I0120 14:24:49.185327 1 controller.go:606] quota admission added evaluator for: endpoints
I0120 14:24:49.195815 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0120 14:24:50.075246 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0120 14:24:50.665254 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0120 14:24:50.735202 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0120 14:24:59.153925 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0120 14:25:07.535612 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0120 14:25:07.673689 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0120 14:25:11.572350 1 client.go:360] parsed scheme: "passthrough"
I0120 14:25:11.572480 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 14:25:11.572560 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 14:25:52.106873 1 client.go:360] parsed scheme: "passthrough"
I0120 14:25:52.106922 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 14:25:52.106932 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 14:26:24.548264 1 client.go:360] parsed scheme: "passthrough"
I0120 14:26:24.548307 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 14:26:24.548316 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c] <==
I0120 14:29:51.905864 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 14:29:51.905899 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 14:30:25.304025 1 client.go:360] parsed scheme: "passthrough"
I0120 14:30:25.304075 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 14:30:25.304085 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0120 14:30:27.414181 1 handler_proxy.go:102] no RequestInfo found in the context
E0120 14:30:27.414283 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0120 14:30:27.414297 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 14:30:58.441275 1 client.go:360] parsed scheme: "passthrough"
I0120 14:30:58.441319 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 14:30:58.441328 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 14:31:29.891590 1 client.go:360] parsed scheme: "passthrough"
I0120 14:31:29.891647 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 14:31:29.891655 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 14:32:00.549550 1 client.go:360] parsed scheme: "passthrough"
I0120 14:32:00.549619 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 14:32:00.549782 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0120 14:32:24.848487 1 handler_proxy.go:102] no RequestInfo found in the context
E0120 14:32:24.848574 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0120 14:32:24.848586 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 14:32:37.968457 1 client.go:360] parsed scheme: "passthrough"
I0120 14:32:37.968506 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 14:32:37.968516 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45] <==
I0120 14:28:48.130610 1 request.go:655] Throttling request took 1.048278816s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 14:28:48.983404 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 14:29:15.044263 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 14:29:20.633782 1 request.go:655] Throttling request took 1.048267223s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 14:29:21.485268 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 14:29:45.546499 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 14:29:53.135704 1 request.go:655] Throttling request took 1.048510892s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1?timeout=32s
W0120 14:29:53.987253 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 14:30:16.048578 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 14:30:25.637696 1 request.go:655] Throttling request took 1.048336549s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1?timeout=32s
W0120 14:30:26.489106 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 14:30:46.550521 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 14:30:58.139528 1 request.go:655] Throttling request took 1.048395462s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
W0120 14:30:58.990963 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 14:31:17.052505 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 14:31:30.641236 1 request.go:655] Throttling request took 1.048195216s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 14:31:31.492764 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 14:31:47.554280 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 14:32:03.143197 1 request.go:655] Throttling request took 1.048115109s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 14:32:03.994652 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 14:32:18.056238 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 14:32:35.645208 1 request.go:655] Throttling request took 1.048310736s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 14:32:36.496590 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 14:32:48.558309 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 14:33:08.147329 1 request.go:655] Throttling request took 1.048099289s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
==> kube-controller-manager [f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec] <==
I0120 14:25:07.571452 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0120 14:25:07.571580 1 shared_informer.go:247] Caches are synced for PVC protection
I0120 14:25:07.571594 1 shared_informer.go:247] Caches are synced for TTL
I0120 14:25:07.581084 1 shared_informer.go:247] Caches are synced for GC
I0120 14:25:07.571775 1 shared_informer.go:247] Caches are synced for daemon sets
I0120 14:25:07.574121 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0120 14:25:07.577338 1 shared_informer.go:247] Caches are synced for ReplicationController
I0120 14:25:07.583823 1 shared_informer.go:247] Caches are synced for attach detach
I0120 14:25:07.710314 1 shared_informer.go:247] Caches are synced for job
I0120 14:25:07.726431 1 shared_informer.go:247] Caches are synced for resource quota
I0120 14:25:07.734594 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7z8qd"
I0120 14:25:07.740645 1 shared_informer.go:247] Caches are synced for namespace
I0120 14:25:07.759075 1 shared_informer.go:247] Caches are synced for resource quota
I0120 14:25:07.780689 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wrpl6"
I0120 14:25:07.825118 1 shared_informer.go:247] Caches are synced for service account
I0120 14:25:07.908910 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0120 14:25:08.209143 1 shared_informer.go:247] Caches are synced for garbage collector
I0120 14:25:08.221149 1 shared_informer.go:247] Caches are synced for garbage collector
I0120 14:25:08.221188 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0120 14:25:09.152671 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0120 14:25:09.176917 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-876j9"
I0120 14:25:12.529155 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0120 14:26:44.025850 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I0120 14:26:44.049498 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
E0120 14:26:44.072901 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
==> kube-proxy [4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25] <==
I0120 14:25:08.758274 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0120 14:25:08.758397 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0120 14:25:08.787279 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0120 14:25:08.787392 1 server_others.go:185] Using iptables Proxier.
I0120 14:25:08.787612 1 server.go:650] Version: v1.20.0
I0120 14:25:08.788119 1 config.go:315] Starting service config controller
I0120 14:25:08.788137 1 shared_informer.go:240] Waiting for caches to sync for service config
I0120 14:25:08.790184 1 config.go:224] Starting endpoint slice config controller
I0120 14:25:08.790198 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0120 14:25:08.888261 1 shared_informer.go:247] Caches are synced for service config
I0120 14:25:08.890488 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d] <==
I0120 14:27:27.477506 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0120 14:27:27.477659 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0120 14:27:27.503875 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0120 14:27:27.504148 1 server_others.go:185] Using iptables Proxier.
I0120 14:27:27.504508 1 server.go:650] Version: v1.20.0
I0120 14:27:27.507857 1 config.go:315] Starting service config controller
I0120 14:27:27.578106 1 shared_informer.go:240] Waiting for caches to sync for service config
I0120 14:27:27.508046 1 config.go:224] Starting endpoint slice config controller
I0120 14:27:27.578144 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0120 14:27:27.678336 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0120 14:27:27.678409 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f] <==
I0120 14:27:16.246703 1 serving.go:331] Generated self-signed cert in-memory
W0120 14:27:23.283560 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0120 14:27:23.284603 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0120 14:27:23.284698 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0120 14:27:23.284774 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0120 14:27:23.744988 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 14:27:23.745017 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 14:27:23.752790 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0120 14:27:23.752895 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
E0120 14:27:23.839998 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 14:27:23.840103 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 14:27:23.840166 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 14:27:23.840221 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 14:27:23.840277 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 14:27:23.840331 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 14:27:23.840385 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 14:27:23.842877 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 14:27:23.856564 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 14:27:23.856607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 14:27:23.856653 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 14:27:24.006024 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0120 14:27:25.046506 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071] <==
W0120 14:24:47.472770 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0120 14:24:47.472873 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0120 14:24:47.542200 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0120 14:24:47.542703 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 14:24:47.542713 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 14:24:47.542731 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0120 14:24:47.575533 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 14:24:47.575871 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 14:24:47.575970 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 14:24:47.576088 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 14:24:47.576164 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 14:24:47.576233 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 14:24:47.576297 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 14:24:47.576423 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 14:24:47.576502 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 14:24:47.576645 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 14:24:47.576670 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 14:24:47.576746 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 14:24:48.466167 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 14:24:48.481444 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 14:24:48.595638 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 14:24:48.640072 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 14:24:48.692478 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 14:24:48.693701 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0120 14:24:50.842835 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jan 20 14:31:22 old-k8s-version-140749 kubelet[663]: E0120 14:31:22.943814 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:31:33 old-k8s-version-140749 kubelet[663]: E0120 14:31:33.943249 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: I0120 14:31:34.943776 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: E0120 14:31:34.945214 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
Jan 20 14:31:46 old-k8s-version-140749 kubelet[663]: E0120 14:31:46.943127 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: I0120 14:31:48.942536 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: E0120 14:31:48.943309 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
Jan 20 14:31:59 old-k8s-version-140749 kubelet[663]: E0120 14:31:59.943359 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: I0120 14:32:00.942559 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: E0120 14:32:00.943110 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: I0120 14:32:11.942457 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.942810 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: I0120 14:32:26.942542 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: I0120 14:32:40.942757 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: I0120 14:32:51.942455 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.942791 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.943917 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:33:02 old-k8s-version-140749 kubelet[663]: E0120 14:33:02.946128 663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:33:04 old-k8s-version-140749 kubelet[663]: I0120 14:33:04.946347 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
Jan 20 14:33:04 old-k8s-version-140749 kubelet[663]: E0120 14:33:04.946800 663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
==> kubernetes-dashboard [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6] <==
2025/01/20 14:27:49 Using namespace: kubernetes-dashboard
2025/01/20 14:27:49 Using in-cluster config to connect to apiserver
2025/01/20 14:27:49 Using secret token for csrf signing
2025/01/20 14:27:49 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/01/20 14:27:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/01/20 14:27:49 Successful initial request to the apiserver, version: v1.20.0
2025/01/20 14:27:49 Generating JWE encryption key
2025/01/20 14:27:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/01/20 14:27:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/01/20 14:27:51 Initializing JWE encryption key from synchronized object
2025/01/20 14:27:51 Creating in-cluster Sidecar client
2025/01/20 14:27:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:27:51 Serving insecurely on HTTP port: 9090
2025/01/20 14:28:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:28:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:29:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:29:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:30:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:30:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:31:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:31:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:32:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:32:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:27:49 Starting overwatch
==> storage-provisioner [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233] <==
I0120 14:28:10.078388 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0120 14:28:10.094148 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0120 14:28:10.094197 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0120 14:28:27.551673 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0120 14:28:27.551733 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8983bdf6-654b-4c06-b3a1-5b8a77c8aef3", APIVersion:"v1", ResourceVersion:"852", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-140749_b6684aa4-3b7c-452e-bbf2-a4a424253972 became leader
I0120 14:28:27.552112 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-140749_b6684aa4-3b7c-452e-bbf2-a4a424253972!
I0120 14:28:27.654331 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-140749_b6684aa4-3b7c-452e-bbf2-a4a424253972!
==> storage-provisioner [46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa] <==
I0120 14:27:27.399505 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0120 14:27:57.401978 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140749 -n old-k8s-version-140749
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-140749 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-lfq2q
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-140749 describe pod metrics-server-9975d5f86-lfq2q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-140749 describe pod metrics-server-9975d5f86-lfq2q: exit status 1 (96.441217ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-lfq2q" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-140749 describe pod metrics-server-9975d5f86-lfq2q: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (372.97s)
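For anyone replaying this post-mortem by hand, the commands below mirror what helpers_test.go ran above. This is a minimal sketch, not part of the harness: it assumes the old-k8s-version-140749 profile still exists on the host, and that the metrics-server pods carry the k8s-app=metrics-server label the minikube addon normally applies. The kubelet ImagePullBackOff entries point at fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable registry (apparently configured deliberately by the test), so the metrics-server pod can never reach Running; the NotFound result above most likely means the pod was deleted or replaced between the two kubectl calls.

# Regenerate the same sectioned log bundle attached above (==> describe nodes <==, ==> kubelet <==, etc.).
out/minikube-linux-arm64 logs -p old-k8s-version-140749

# Re-run the harness's non-running-pod query verbatim.
kubectl --context old-k8s-version-140749 get po -A --field-selector=status.phase!=Running

# Describe metrics-server by label instead of by pod name, so a replaced
# pod (the NotFound case above) is still found.
kubectl --context old-k8s-version-140749 -n kube-system describe po -l k8s-app=metrics-server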