=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m12.364929582s)
-- stdout --
* [old-k8s-version-813213] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20317
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
* Using the docker driver based on existing profile
* Starting "old-k8s-version-813213" primary control-plane node in "old-k8s-version-813213" cluster
* Pulling base image v0.0.46 ...
* Restarting existing docker container for "old-k8s-version-813213" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-813213 addons enable metrics-server
* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
-- /stdout --
** stderr **
I0127 13:18:13.624763 1391899 out.go:345] Setting OutFile to fd 1 ...
I0127 13:18:13.625186 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:18:13.625231 1391899 out.go:358] Setting ErrFile to fd 2...
I0127 13:18:13.625261 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:18:13.625650 1391899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
I0127 13:18:13.626248 1391899 out.go:352] Setting JSON to false
I0127 13:18:13.627752 1391899 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21639,"bootTime":1737962255,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I0127 13:18:13.627881 1391899 start.go:139] virtualization:
I0127 13:18:13.635568 1391899 out.go:177] * [old-k8s-version-813213] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0127 13:18:13.638498 1391899 out.go:177] - MINIKUBE_LOCATION=20317
I0127 13:18:13.638527 1391899 notify.go:220] Checking for updates...
I0127 13:18:13.644677 1391899 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 13:18:13.647182 1391899 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
I0127 13:18:13.649683 1391899 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
I0127 13:18:13.652516 1391899 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0127 13:18:13.655141 1391899 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 13:18:13.658354 1391899 config.go:182] Loaded profile config "old-k8s-version-813213": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0127 13:18:13.661616 1391899 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
I0127 13:18:13.664176 1391899 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 13:18:13.731148 1391899 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
I0127 13:18:13.731314 1391899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0127 13:18:13.847397 1391899 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:61 SystemTime:2025-01-27 13:18:13.837458126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0127 13:18:13.847518 1391899 docker.go:318] overlay module found
I0127 13:18:13.850595 1391899 out.go:177] * Using the docker driver based on existing profile
I0127 13:18:13.853110 1391899 start.go:297] selected driver: docker
I0127 13:18:13.853149 1391899 start.go:901] validating driver "docker" against &{Name:old-k8s-version-813213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 13:18:13.853262 1391899 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 13:18:13.853997 1391899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0127 13:18:13.960155 1391899 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:61 SystemTime:2025-01-27 13:18:13.938431586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0127 13:18:13.960572 1391899 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 13:18:13.960603 1391899 cni.go:84] Creating CNI manager for ""
I0127 13:18:13.960652 1391899 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0127 13:18:13.960697 1391899 start.go:340] cluster config:
{Name:old-k8s-version-813213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 13:18:13.965201 1391899 out.go:177] * Starting "old-k8s-version-813213" primary control-plane node in "old-k8s-version-813213" cluster
I0127 13:18:13.967819 1391899 cache.go:121] Beginning downloading kic base image for docker with containerd
I0127 13:18:13.970367 1391899 out.go:177] * Pulling base image v0.0.46 ...
I0127 13:18:13.972849 1391899 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 13:18:13.972908 1391899 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0127 13:18:13.972923 1391899 cache.go:56] Caching tarball of preloaded images
I0127 13:18:13.973021 1391899 preload.go:172] Found /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0127 13:18:13.973052 1391899 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0127 13:18:13.973165 1391899 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/config.json ...
I0127 13:18:13.973383 1391899 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0127 13:18:14.018222 1391899 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0127 13:18:14.018251 1391899 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0127 13:18:14.018264 1391899 cache.go:227] Successfully downloaded all kic artifacts
I0127 13:18:14.018289 1391899 start.go:360] acquireMachinesLock for old-k8s-version-813213: {Name:mkdb8ba967fbef4a000dd6e7c9825cdd41640f4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:18:14.018352 1391899 start.go:364] duration metric: took 40.943µs to acquireMachinesLock for "old-k8s-version-813213"
I0127 13:18:14.018379 1391899 start.go:96] Skipping create...Using existing machine configuration
I0127 13:18:14.018388 1391899 fix.go:54] fixHost starting:
I0127 13:18:14.018655 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
I0127 13:18:14.050862 1391899 fix.go:112] recreateIfNeeded on old-k8s-version-813213: state=Stopped err=<nil>
W0127 13:18:14.050897 1391899 fix.go:138] unexpected machine state, will restart: <nil>
I0127 13:18:14.053885 1391899 out.go:177] * Restarting existing docker container for "old-k8s-version-813213" ...
I0127 13:18:14.056502 1391899 cli_runner.go:164] Run: docker start old-k8s-version-813213
I0127 13:18:14.513107 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
I0127 13:18:14.552659 1391899 kic.go:430] container "old-k8s-version-813213" state is running.
I0127 13:18:14.553079 1391899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-813213
I0127 13:18:14.577336 1391899 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/config.json ...
I0127 13:18:14.577562 1391899 machine.go:93] provisionDockerMachine start ...
I0127 13:18:14.577626 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:14.610941 1391899 main.go:141] libmachine: Using SSH client type: native
I0127 13:18:14.611198 1391899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 34227 <nil> <nil>}
I0127 13:18:14.611207 1391899 main.go:141] libmachine: About to run SSH command:
hostname
I0127 13:18:14.612217 1391899 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42146->127.0.0.1:34227: read: connection reset by peer
I0127 13:18:17.761119 1391899 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-813213
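Every remote step in this phase resolves the container's SSH endpoint the same way: ask Docker which host port is bound to the container's 22/tcp (34227 in this run) and dial 127.0.0.1 on it; the first dial at 13:18:14.612 hits a connection reset because sshd inside the restarted container is not up yet, and the provisioner retries until it answers. A stand-alone sketch of that port lookup, using the same inspect template as the cli_runner lines above; the helper name and error handling are illustrative, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker for the host port mapped to the container's
// 22/tcp, mirroring the `docker container inspect -f ...HostPort...`
// invocations in the log.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("old-k8s-version-813213")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 34227 in this run
}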
I0127 13:18:17.761150 1391899 ubuntu.go:169] provisioning hostname "old-k8s-version-813213"
I0127 13:18:17.761227 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:17.791317 1391899 main.go:141] libmachine: Using SSH client type: native
I0127 13:18:17.791561 1391899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 34227 <nil> <nil>}
I0127 13:18:17.791580 1391899 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-813213 && echo "old-k8s-version-813213" | sudo tee /etc/hostname
I0127 13:18:17.951706 1391899 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-813213
I0127 13:18:17.951865 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:17.981356 1391899 main.go:141] libmachine: Using SSH client type: native
I0127 13:18:17.981618 1391899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 34227 <nil> <nil>}
I0127 13:18:17.981635 1391899 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-813213' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-813213/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-813213' | sudo tee -a /etc/hosts;
fi
fi
I0127 13:18:18.117544 1391899 main.go:141] libmachine: SSH cmd err, output: <nil>:
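The guarded script just executed is idempotent: it touches /etc/hosts only when no line already names the host, and then either rewrites an existing 127.0.1.1 entry in place or appends one. A small sketch that rebuilds the same snippet for an arbitrary hostname (an illustrative helper, not minikube's actual API):

package main

import "fmt"

// hostsPatch returns the guarded /etc/hosts script seen in the log:
// skip entirely if some line already ends with the hostname, otherwise
// rewrite the 127.0.1.1 entry or append a fresh one.
func hostsPatch(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() { fmt.Print(hostsPatch("old-k8s-version-813213")) }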
I0127 13:18:18.117573 1391899 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20317-1181389/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-1181389/.minikube}
I0127 13:18:18.117593 1391899 ubuntu.go:177] setting up certificates
I0127 13:18:18.117604 1391899 provision.go:84] configureAuth start
I0127 13:18:18.117675 1391899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-813213
I0127 13:18:18.140023 1391899 provision.go:143] copyHostCerts
I0127 13:18:18.140110 1391899 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.pem, removing ...
I0127 13:18:18.140132 1391899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.pem
I0127 13:18:18.140209 1391899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.pem (1082 bytes)
I0127 13:18:18.140308 1391899 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-1181389/.minikube/cert.pem, removing ...
I0127 13:18:18.140319 1391899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-1181389/.minikube/cert.pem
I0127 13:18:18.140348 1391899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-1181389/.minikube/cert.pem (1123 bytes)
I0127 13:18:18.140407 1391899 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-1181389/.minikube/key.pem, removing ...
I0127 13:18:18.140414 1391899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-1181389/.minikube/key.pem
I0127 13:18:18.140438 1391899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-1181389/.minikube/key.pem (1675 bytes)
I0127 13:18:18.140491 1391899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-813213 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-813213]
I0127 13:18:18.565007 1391899 provision.go:177] copyRemoteCerts
I0127 13:18:18.565099 1391899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 13:18:18.565141 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:18.581838 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
I0127 13:18:18.673662 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0127 13:18:18.698715 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0127 13:18:18.722959 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 13:18:18.747579 1391899 provision.go:87] duration metric: took 629.95728ms to configureAuth
I0127 13:18:18.747608 1391899 ubuntu.go:193] setting minikube options for container-runtime
I0127 13:18:18.747802 1391899 config.go:182] Loaded profile config "old-k8s-version-813213": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0127 13:18:18.747814 1391899 machine.go:96] duration metric: took 4.170244742s to provisionDockerMachine
I0127 13:18:18.747822 1391899 start.go:293] postStartSetup for "old-k8s-version-813213" (driver="docker")
I0127 13:18:18.747833 1391899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 13:18:18.747897 1391899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 13:18:18.747940 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:18.765180 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
I0127 13:18:18.855629 1391899 ssh_runner.go:195] Run: cat /etc/os-release
I0127 13:18:18.859732 1391899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0127 13:18:18.859775 1391899 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0127 13:18:18.859786 1391899 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0127 13:18:18.859797 1391899 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0127 13:18:18.859809 1391899 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-1181389/.minikube/addons for local assets ...
I0127 13:18:18.859868 1391899 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-1181389/.minikube/files for local assets ...
I0127 13:18:18.859950 1391899 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-1181389/.minikube/files/etc/ssl/certs/11867732.pem -> 11867732.pem in /etc/ssl/certs
I0127 13:18:18.860073 1391899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 13:18:18.871076 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/files/etc/ssl/certs/11867732.pem --> /etc/ssl/certs/11867732.pem (1708 bytes)
I0127 13:18:18.901681 1391899 start.go:296] duration metric: took 153.84267ms for postStartSetup
I0127 13:18:18.901770 1391899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0127 13:18:18.901815 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:18.926057 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
I0127 13:18:19.018654 1391899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0127 13:18:19.024410 1391899 fix.go:56] duration metric: took 5.006014118s for fixHost
I0127 13:18:19.024432 1391899 start.go:83] releasing machines lock for "old-k8s-version-813213", held for 5.006066358s
I0127 13:18:19.024509 1391899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-813213
I0127 13:18:19.050859 1391899 ssh_runner.go:195] Run: cat /version.json
I0127 13:18:19.050912 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:19.051223 1391899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 13:18:19.051274 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:19.072212 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
I0127 13:18:19.090258 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
I0127 13:18:19.176722 1391899 ssh_runner.go:195] Run: systemctl --version
I0127 13:18:19.331346 1391899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0127 13:18:19.336207 1391899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0127 13:18:19.358024 1391899 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0127 13:18:19.358101 1391899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 13:18:19.370592 1391899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0127 13:18:19.370616 1391899 start.go:495] detecting cgroup driver to use...
I0127 13:18:19.370648 1391899 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0127 13:18:19.370696 1391899 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 13:18:19.387096 1391899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 13:18:19.403981 1391899 docker.go:217] disabling cri-docker service (if available) ...
I0127 13:18:19.404077 1391899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 13:18:19.418605 1391899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 13:18:19.431651 1391899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 13:18:19.546224 1391899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 13:18:19.661828 1391899 docker.go:233] disabling docker service ...
I0127 13:18:19.661916 1391899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 13:18:19.683055 1391899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 13:18:19.698399 1391899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 13:18:19.798047 1391899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 13:18:19.895332 1391899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 13:18:19.908036 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 13:18:19.924905 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0127 13:18:19.936750 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 13:18:19.954857 1391899 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 13:18:19.955006 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 13:18:19.975264 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 13:18:19.985973 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 13:18:19.996566 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 13:18:20.008030 1391899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 13:18:20.019931 1391899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 13:18:20.031827 1391899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 13:18:20.042822 1391899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 13:18:20.053350 1391899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:18:20.165304 1391899 ssh_runner.go:195] Run: sudo systemctl restart containerd
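The run of sed edits between 13:18:19.924 and 13:18:20.031 rewrites /etc/containerd/config.toml in place (pause image, oom score setting, cgroup driver, runc v2 shim, CNI conf dir) before the daemon-reload and restart above pick the changes up. As a worked example, here is the SystemdCgroup=false edit redone natively; the path and the regex follow the log, the rest is a minimal sketch:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same substitution as `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`,
	// preserving each matched line's indentation via the captured group.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("patched", path)
}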
I0127 13:18:20.363734 1391899 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 13:18:20.363814 1391899 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 13:18:20.367637 1391899 start.go:563] Will wait 60s for crictl version
I0127 13:18:20.367743 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:18:20.371254 1391899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 13:18:20.429795 1391899 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.24
RuntimeApiVersion: v1
I0127 13:18:20.429905 1391899 ssh_runner.go:195] Run: containerd --version
I0127 13:18:20.457692 1391899 ssh_runner.go:195] Run: containerd --version
I0127 13:18:20.489582 1391899 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
I0127 13:18:20.492602 1391899 cli_runner.go:164] Run: docker network inspect old-k8s-version-813213 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 13:18:20.512007 1391899 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0127 13:18:20.515924 1391899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 13:18:20.527444 1391899 kubeadm.go:883] updating cluster {Name:old-k8s-version-813213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 13:18:20.527569 1391899 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 13:18:20.527628 1391899 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:18:20.572671 1391899 containerd.go:627] all images are preloaded for containerd runtime.
I0127 13:18:20.572694 1391899 containerd.go:534] Images already preloaded, skipping extraction
I0127 13:18:20.572754 1391899 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:18:20.627158 1391899 containerd.go:627] all images are preloaded for containerd runtime.
I0127 13:18:20.627191 1391899 cache_images.go:84] Images are preloaded, skipping loading
I0127 13:18:20.627200 1391899 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0127 13:18:20.627364 1391899 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-813213 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
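The kubelet unit drop-in printed above is rendered from the node's settings and lands below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes at 13:18:20.705). A hypothetical re-creation with text/template; the struct, its field names, and the trimmed flag list are assumptions for illustration, not minikube's code:

package main

import (
	"os"
	"text/template"
)

// node carries the per-node values substituted into the drop-in;
// the type and fields are invented for this sketch.
type node struct {
	Version, Hostname, IP string
}

const dropIn = `[Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override={{.Hostname}} --node-ip={{.IP}}
[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, node{"v1.20.0", "old-k8s-version-813213", "192.168.76.2"})
}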
I0127 13:18:20.627488 1391899 ssh_runner.go:195] Run: sudo crictl info
I0127 13:18:20.685832 1391899 cni.go:84] Creating CNI manager for ""
I0127 13:18:20.685860 1391899 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0127 13:18:20.685870 1391899 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0127 13:18:20.685912 1391899 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-813213 NodeName:old-k8s-version-813213 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0127 13:18:20.686081 1391899 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-813213"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
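The config above is one file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is staged as /var/tmp/minikube/kubeadm.yaml.new (2125 bytes) a few lines below. A stdlib-only sketch for splitting such a multi-document file and listing each kind, with the sample trimmed to two documents to stay short:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Abbreviated stand-in for the generated kubeadm config.
	config := `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration`
	kindRe := regexp.MustCompile(`(?m)^kind: (\S+)`)
	for i, doc := range strings.Split(config, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i+1, m[1])
		}
	}
}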
I0127 13:18:20.686165 1391899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0127 13:18:20.696130 1391899 binaries.go:44] Found k8s binaries, skipping transfer
I0127 13:18:20.696229 1391899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 13:18:20.705874 1391899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0127 13:18:20.724903 1391899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 13:18:20.746603 1391899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0127 13:18:20.771320 1391899 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0127 13:18:20.774961 1391899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 13:18:20.788295 1391899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:18:20.922044 1391899 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 13:18:20.944827 1391899 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213 for IP: 192.168.76.2
I0127 13:18:20.944846 1391899 certs.go:194] generating shared ca certs ...
I0127 13:18:20.944863 1391899 certs.go:226] acquiring lock for ca certs: {Name:mk935ce1b2e17056c705e5bfeb742a058476d97f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:18:20.945001 1391899 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.key
I0127 13:18:20.945143 1391899 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/proxy-client-ca.key
I0127 13:18:20.945153 1391899 certs.go:256] generating profile certs ...
I0127 13:18:20.945241 1391899 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/client.key
I0127 13:18:20.945306 1391899 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/apiserver.key.9b729343
I0127 13:18:20.945348 1391899 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/proxy-client.key
I0127 13:18:20.945475 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/1186773.pem (1338 bytes)
W0127 13:18:20.945509 1391899 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/1186773_empty.pem, impossibly tiny 0 bytes
I0127 13:18:20.945517 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca-key.pem (1675 bytes)
I0127 13:18:20.945553 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem (1082 bytes)
I0127 13:18:20.945579 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/cert.pem (1123 bytes)
I0127 13:18:20.945600 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/key.pem (1675 bytes)
I0127 13:18:20.945644 1391899 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-1181389/.minikube/files/etc/ssl/certs/11867732.pem (1708 bytes)
I0127 13:18:20.946378 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 13:18:21.024599 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0127 13:18:21.089591 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 13:18:21.138574 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0127 13:18:21.184382 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0127 13:18:21.230740 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 13:18:21.278096 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 13:18:21.315319 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/old-k8s-version-813213/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 13:18:21.348994 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/1186773.pem --> /usr/share/ca-certificates/1186773.pem (1338 bytes)
I0127 13:18:21.394974 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/files/etc/ssl/certs/11867732.pem --> /usr/share/ca-certificates/11867732.pem (1708 bytes)
I0127 13:18:21.437930 1391899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-1181389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 13:18:21.471330 1391899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 13:18:21.494863 1391899 ssh_runner.go:195] Run: openssl version
I0127 13:18:21.502715 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11867732.pem && ln -fs /usr/share/ca-certificates/11867732.pem /etc/ssl/certs/11867732.pem"
I0127 13:18:21.519232 1391899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11867732.pem
I0127 13:18:21.523603 1391899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:39 /usr/share/ca-certificates/11867732.pem
I0127 13:18:21.523716 1391899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11867732.pem
I0127 13:18:21.532403 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11867732.pem /etc/ssl/certs/3ec20f2e.0"
I0127 13:18:21.547144 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 13:18:21.559394 1391899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 13:18:21.563333 1391899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:32 /usr/share/ca-certificates/minikubeCA.pem
I0127 13:18:21.563444 1391899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 13:18:21.570864 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 13:18:21.591303 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1186773.pem && ln -fs /usr/share/ca-certificates/1186773.pem /etc/ssl/certs/1186773.pem"
I0127 13:18:21.603902 1391899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1186773.pem
I0127 13:18:21.608985 1391899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:39 /usr/share/ca-certificates/1186773.pem
I0127 13:18:21.609082 1391899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1186773.pem
I0127 13:18:21.617355 1391899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1186773.pem /etc/ssl/certs/51391683.0"
I0127 13:18:21.627522 1391899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 13:18:21.631864 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0127 13:18:21.640220 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0127 13:18:21.647806 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0127 13:18:21.657403 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0127 13:18:21.666114 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0127 13:18:21.673971 1391899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
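The six openssl calls above are a cheap expiry probe: `x509 -checkend 86400` exits 0 only if the certificate remains valid for at least another 24 hours, so any non-zero exit would force regeneration instead of reuse. The same check as a tiny Go helper; the path is one from the log, and shelling out rather than parsing the cert deliberately matches what the log shows:

package main

import (
	"fmt"
	"os/exec"
)

// validFor24h mirrors `openssl x509 -noout -in <cert> -checkend 86400`,
// which succeeds iff the cert does not expire within the next 86400s.
func validFor24h(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", "86400").Run() == nil
}

func main() {
	fmt.Println(validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}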
I0127 13:18:21.681239 1391899 kubeadm.go:392] StartCluster: {Name:old-k8s-version-813213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 13:18:21.681339 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 13:18:21.681408 1391899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 13:18:21.732524 1391899 cri.go:89] found id: "2a6b3575611924ecc133f42914e9bdfa06e687ead6ff13a333feb19a4af6a6b0"
I0127 13:18:21.732554 1391899 cri.go:89] found id: "9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
I0127 13:18:21.732560 1391899 cri.go:89] found id: "8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
I0127 13:18:21.732563 1391899 cri.go:89] found id: "2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
I0127 13:18:21.732566 1391899 cri.go:89] found id: "dd1129a7857e46456ebb67cbdb035eeee9a90ede69ebab5467267e962c2ff88e"
I0127 13:18:21.732570 1391899 cri.go:89] found id: "4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
I0127 13:18:21.732573 1391899 cri.go:89] found id: "fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
I0127 13:18:21.732584 1391899 cri.go:89] found id: "dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
I0127 13:18:21.732589 1391899 cri.go:89] found id: "f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
I0127 13:18:21.732597 1391899 cri.go:89] found id: ""
I0127 13:18:21.732650 1391899 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0127 13:18:21.749320 1391899 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T13:18:21Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0127 13:18:21.749414 1391899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 13:18:21.761206 1391899 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0127 13:18:21.761227 1391899 kubeadm.go:593] restartPrimaryControlPlane start ...
I0127 13:18:21.761281 1391899 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 13:18:21.772201 1391899 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 13:18:21.772663 1391899 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-813213" does not appear in /home/jenkins/minikube-integration/20317-1181389/kubeconfig
I0127 13:18:21.772774 1391899 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-1181389/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-813213" cluster setting kubeconfig missing "old-k8s-version-813213" context setting]
I0127 13:18:21.773092 1391899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-1181389/kubeconfig: {Name:mk592f9fdf35ac90774b473f4b93a1c13d4536fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:18:21.774350 1391899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 13:18:21.785760 1391899 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0127 13:18:21.785806 1391899 kubeadm.go:597] duration metric: took 24.563065ms to restartPrimaryControlPlane
I0127 13:18:21.785817 1391899 kubeadm.go:394] duration metric: took 104.588715ms to StartCluster
I0127 13:18:21.785833 1391899 settings.go:142] acquiring lock: {Name:mk65fea0c0d05cbe7dd04ab1bf6947a1297febb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:18:21.785891 1391899 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20317-1181389/kubeconfig
I0127 13:18:21.786506 1391899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-1181389/kubeconfig: {Name:mk592f9fdf35ac90774b473f4b93a1c13d4536fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:18:21.786688 1391899 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 13:18:21.786983 1391899 config.go:182] Loaded profile config "old-k8s-version-813213": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0127 13:18:21.787030 1391899 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 13:18:21.787102 1391899 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-813213"
I0127 13:18:21.787119 1391899 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-813213"
W0127 13:18:21.787129 1391899 addons.go:247] addon storage-provisioner should already be in state true
I0127 13:18:21.787152 1391899 host.go:66] Checking if "old-k8s-version-813213" exists ...
I0127 13:18:21.787158 1391899 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-813213"
I0127 13:18:21.787179 1391899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-813213"
I0127 13:18:21.787477 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
I0127 13:18:21.787613 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
I0127 13:18:21.791474 1391899 addons.go:69] Setting dashboard=true in profile "old-k8s-version-813213"
I0127 13:18:21.791505 1391899 addons.go:238] Setting addon dashboard=true in "old-k8s-version-813213"
W0127 13:18:21.791513 1391899 addons.go:247] addon dashboard should already be in state true
I0127 13:18:21.791551 1391899 host.go:66] Checking if "old-k8s-version-813213" exists ...
I0127 13:18:21.792025 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
I0127 13:18:21.792180 1391899 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-813213"
I0127 13:18:21.792192 1391899 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-813213"
W0127 13:18:21.792198 1391899 addons.go:247] addon metrics-server should already be in state true
I0127 13:18:21.792220 1391899 host.go:66] Checking if "old-k8s-version-813213" exists ...
I0127 13:18:21.792629 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
I0127 13:18:21.793553 1391899 out.go:177] * Verifying Kubernetes components...
I0127 13:18:21.801229 1391899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:18:21.847138 1391899 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-813213"
W0127 13:18:21.847211 1391899 addons.go:247] addon default-storageclass should already be in state true
I0127 13:18:21.847266 1391899 host.go:66] Checking if "old-k8s-version-813213" exists ...
I0127 13:18:21.847825 1391899 cli_runner.go:164] Run: docker container inspect old-k8s-version-813213 --format={{.State.Status}}
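Each addon toggle above is gated on the same probe: ask Docker for the container's state before trying to install anything into it. A self-contained sketch of that probe, using only the docker CLI invocation visible in the cli_runner lines (the helper name is ours, not minikube's):

    // Sketch of the status probe run repeatedly above: ask Docker for the
    // container's state (e.g. "running") before enabling each addon.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        status, err := containerStatus("old-k8s-version-813213")
        if err != nil {
            panic(err)
        }
        fmt.Println(status) // "running" while the cluster container is up
    }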
I0127 13:18:21.856022 1391899 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 13:18:21.858976 1391899 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:18:21.859000 1391899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 13:18:21.859068 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:21.869118 1391899 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 13:18:21.877979 1391899 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 13:18:21.880689 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 13:18:21.880716 1391899 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 13:18:21.880783 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:21.881101 1391899 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 13:18:21.885114 1391899 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 13:18:21.885141 1391899 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 13:18:21.885209 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:21.921571 1391899 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 13:18:21.921590 1391899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 13:18:21.921653 1391899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-813213
I0127 13:18:21.925076 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
I0127 13:18:21.973202 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
I0127 13:18:21.980900 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
I0127 13:18:21.984807 1391899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/20317-1181389/.minikube/machines/old-k8s-version-813213/id_rsa Username:docker}
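The four 22/tcp inspect calls above resolve the host port Docker mapped to the container's SSH port, 34227 in this run, and each addon installer then opens its own SSH session to 127.0.0.1:34227 with the profile's id_rsa key; running docker port old-k8s-version-813213 22 by hand would report the same mapping.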
I0127 13:18:22.025688 1391899 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 13:18:22.073153 1391899 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-813213" to be "Ready" ...
I0127 13:18:22.134540 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:18:22.221781 1391899 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 13:18:22.221842 1391899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 13:18:22.242405 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 13:18:22.262820 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 13:18:22.262848 1391899 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 13:18:22.370327 1391899 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 13:18:22.370357 1391899 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 13:18:22.432937 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 13:18:22.432966 1391899 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 13:18:22.481835 1391899 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:18:22.481866 1391899 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
W0127 13:18:22.514178 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:22.514222 1391899 retry.go:31] will retry after 371.986766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0127 13:18:22.514297 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:22.514310 1391899 retry.go:31] will retry after 132.374168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
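From here until roughly 13:18:40 the log is one repeating pattern: the docker container was just restarted, the apiserver on localhost:8443 is not yet accepting connections, so every kubectl apply fails with a refused connection and retry.go reschedules it after a short, growing delay (132ms up to a few seconds below). A minimal sketch of that retry shape, with illustrative delays and jitter rather than minikube's actual parameters:

    // Minimal retry-with-growing-delay sketch matching the retry.go lines in
    // this log; the doubling base and jitter here are illustrative only.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 4 { // pretend the apiserver comes up on the 4th try
                return errors.New("connection to the server localhost:8443 was refused")
            }
            return nil
        })
        fmt.Println("done:", err)
    }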
I0127 13:18:22.519845 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 13:18:22.519873 1391899 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 13:18:22.544549 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:18:22.548375 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 13:18:22.548401 1391899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 13:18:22.591235 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 13:18:22.591264 1391899 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 13:18:22.642244 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 13:18:22.642277 1391899 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 13:18:22.647478 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0127 13:18:22.674743 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 13:18:22.674769 1391899 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
W0127 13:18:22.769596 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:22.769640 1391899 retry.go:31] will retry after 221.181127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:22.775214 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 13:18:22.775240 1391899 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
W0127 13:18:22.815722 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:22.815755 1391899 retry.go:31] will retry after 525.933911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:22.816354 1391899 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:18:22.816383 1391899 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 13:18:22.838875 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:18:22.887047 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0127 13:18:22.986541 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:22.986583 1391899 retry.go:31] will retry after 345.413306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:22.991919 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0127 13:18:23.003252 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.003293 1391899 retry.go:31] will retry after 469.093804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0127 13:18:23.071095 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.071128 1391899 retry.go:31] will retry after 456.595826ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.333084 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:18:23.342427 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0127 13:18:23.458383 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.458464 1391899 retry.go:31] will retry after 487.031074ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.472584 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0127 13:18:23.499436 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.499514 1391899 retry.go:31] will retry after 365.36057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.528630 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0127 13:18:23.558505 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.558594 1391899 retry.go:31] will retry after 486.935563ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0127 13:18:23.612131 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.612166 1391899 retry.go:31] will retry after 447.709657ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.865847 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0127 13:18:23.942965 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.942997 1391899 retry.go:31] will retry after 986.4987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:23.946119 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0127 13:18:24.019401 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:24.019487 1391899 retry.go:31] will retry after 570.089089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:24.046696 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:18:24.060095 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:18:24.074190 1391899 node_ready.go:53] error getting node "old-k8s-version-813213": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-813213": dial tcp 192.168.76.2:8443: connect: connection refused
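The node_ready poll runs in parallel with the addon applies and hits the same startup window: the direct probe of https://192.168.76.2:8443 is refused here and again at 13:18:26 and 13:18:29 below, and each refusal is treated as a transient error to retry within the 6m budget, not as a failure.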
W0127 13:18:24.156609 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:24.156704 1391899 retry.go:31] will retry after 1.164313936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0127 13:18:24.173550 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:24.173587 1391899 retry.go:31] will retry after 456.559808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:24.590593 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:18:24.630504 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0127 13:18:24.683816 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:24.683910 1391899 retry.go:31] will retry after 846.273649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0127 13:18:24.730383 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:24.730416 1391899 retry.go:31] will retry after 1.83841666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:24.930556 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0127 13:18:25.022371 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:25.022407 1391899 retry.go:31] will retry after 1.62228137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:25.321247 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0127 13:18:25.461529 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:25.461563 1391899 retry.go:31] will retry after 1.585764216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:25.532489 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0127 13:18:25.684236 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:25.684272 1391899 retry.go:31] will retry after 765.340172ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:26.450751 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:18:26.569021 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:18:26.574622 1391899 node_ready.go:53] error getting node "old-k8s-version-813213": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-813213": dial tcp 192.168.76.2:8443: connect: connection refused
W0127 13:18:26.584123 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:26.584159 1391899 retry.go:31] will retry after 2.709365195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:26.645498 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0127 13:18:26.726863 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:26.726894 1391899 retry.go:31] will retry after 1.411182598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0127 13:18:26.811512 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:26.811546 1391899 retry.go:31] will retry after 1.224324798s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:27.047943 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0127 13:18:27.184126 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:27.184164 1391899 retry.go:31] will retry after 2.443074526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:28.036806 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0127 13:18:28.138908 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0127 13:18:28.178121 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:28.178163 1391899 retry.go:31] will retry after 3.72387347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0127 13:18:28.245652 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:28.245685 1391899 retry.go:31] will retry after 3.277610879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:29.073867 1391899 node_ready.go:53] error getting node "old-k8s-version-813213": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-813213": dial tcp 192.168.76.2:8443: connect: connection refused
I0127 13:18:29.294230 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0127 13:18:29.386769 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:29.386807 1391899 retry.go:31] will retry after 1.487273331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:29.627592 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0127 13:18:29.715943 1391899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:29.715973 1391899 retry.go:31] will retry after 3.225684221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0127 13:18:30.875053 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:18:31.524011 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:18:31.902602 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0127 13:18:32.942230 1391899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:18:37.990947 1391899 node_ready.go:49] node "old-k8s-version-813213" has status "Ready":"True"
I0127 13:18:37.990969 1391899 node_ready.go:38] duration metric: took 15.917732997s for node "old-k8s-version-813213" to be "Ready" ...
I0127 13:18:37.990981 1391899 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
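With the node Ready after 15.9s, the wait shifts to the labeled system pods listed above. Per pod, the pod_ready check amounts to fetching the pod and reading its Ready condition; a self-contained client-go sketch of that check, assuming a standard kubeconfig (illustrative, not minikube's own code):

    // Sketch: report whether a kube-system pod has condition Ready=True,
    // the same check the pod_ready lines below are logging.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
            "coredns-74ff55c5b-2phj4", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("Ready=%s\n", c.Status) // "True" once the pod is up
            }
        }
    }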
I0127 13:18:38.186022 1391899 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-2phj4" in "kube-system" namespace to be "Ready" ...
I0127 13:18:38.445834 1391899 pod_ready.go:93] pod "coredns-74ff55c5b-2phj4" in "kube-system" namespace has status "Ready":"True"
I0127 13:18:38.445912 1391899 pod_ready.go:82] duration metric: took 259.799034ms for pod "coredns-74ff55c5b-2phj4" in "kube-system" namespace to be "Ready" ...
I0127 13:18:38.445941 1391899 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
I0127 13:18:38.602382 1391899 pod_ready.go:93] pod "etcd-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"True"
I0127 13:18:38.602458 1391899 pod_ready.go:82] duration metric: took 156.496199ms for pod "etcd-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
I0127 13:18:38.602487 1391899 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
I0127 13:18:38.687674 1391899 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"True"
I0127 13:18:38.687746 1391899 pod_ready.go:82] duration metric: took 85.233342ms for pod "kube-apiserver-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
I0127 13:18:38.687774 1391899 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
I0127 13:18:40.373077 1391899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.497978147s)
I0127 13:18:40.373298 1391899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.470660537s)
I0127 13:18:40.373352 1391899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.431095243s)
I0127 13:18:40.373249 1391899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.849200936s)
I0127 13:18:40.373549 1391899 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-813213"
I0127 13:18:40.376620 1391899 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-813213 addons enable metrics-server
I0127 13:18:40.383402 1391899 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
I0127 13:18:40.386249 1391899 addons.go:514] duration metric: took 18.599198833s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
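Addon setup is complete at this point; the multi-second durations reported at 13:18:40 are the earlier applies finally succeeding once the apiserver answered. The remaining gate is pod readiness, and the run of pod_ready:103 lines below shows kube-controller-manager-old-k8s-version-813213 still reporting Ready False on each check, roughly every 2 to 2.5 seconds.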
I0127 13:18:40.695578 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:18:43.194602 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:18:45.195216 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:18:47.703148 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:18:50.195122 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:18:52.198218 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:18:54.717913 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:18:57.195944 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:18:59.702848 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:01.705357 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:04.195657 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:06.710345 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:09.195156 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:11.716563 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:13.729824 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:15.734665 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:18.195277 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:20.195878 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:22.702107 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:24.708578 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:27.195004 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:29.698045 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:31.723407 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:34.195461 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:36.696193 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:38.700026 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:40.700671 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:42.703385 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:45.198443 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:47.701619 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:50.195921 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:52.702269 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:55.194404 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:57.197603 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:19:59.198142 1391899 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:01.703694 1391899 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"True"
I0127 13:20:01.703725 1391899 pod_ready.go:82] duration metric: took 1m23.015929707s for pod "kube-controller-manager-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
I0127 13:20:01.703742 1391899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8gl5q" in "kube-system" namespace to be "Ready" ...
I0127 13:20:01.719573 1391899 pod_ready.go:93] pod "kube-proxy-8gl5q" in "kube-system" namespace has status "Ready":"True"
I0127 13:20:01.719606 1391899 pod_ready.go:82] duration metric: took 15.853882ms for pod "kube-proxy-8gl5q" in "kube-system" namespace to be "Ready" ...
I0127 13:20:01.719619 1391899 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
I0127 13:20:01.725917 1391899 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-813213" in "kube-system" namespace has status "Ready":"True"
I0127 13:20:01.725949 1391899 pod_ready.go:82] duration metric: took 6.319702ms for pod "kube-scheduler-old-k8s-version-813213" in "kube-system" namespace to be "Ready" ...
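
Each "waiting up to 6m0s ... to be \"Ready\"" block above boils down to polling the pod's Ready condition until it flips to True or the budget runs out. A minimal client-go sketch of that loop (a sketch only; minikube's pod_ready.go differs in detail, and the kubeconfig path, pod name, and 2.5s poll interval are taken from this log purely for illustration):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// mirroring the `has status "Ready":"True"` checks in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Wait up to 6m0s, polling roughly every 2.5s like the pod_ready loop above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-old-k8s-version-813213", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("waitPodCondition:", ctx.Err()) // context deadline exceeded
			return
		case <-time.After(2500 * time.Millisecond):
		}
	}
}
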
I0127 13:20:01.725991 1391899 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace to be "Ready" ...
I0127 13:20:03.734623 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:06.232985 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:08.732495 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:11.233232 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:13.732159 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:15.733675 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:18.232203 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:20.232685 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:22.732265 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:24.732863 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:27.233069 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:29.732704 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:31.733718 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:33.736633 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:36.233201 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:38.731490 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:40.732887 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:43.232275 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:45.236944 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:47.732065 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:50.232010 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:52.232448 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:54.232827 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:56.732053 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:20:59.232264 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:01.233320 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:03.737362 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:06.231441 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:08.232882 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:10.732077 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:13.233948 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:15.732599 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:18.231469 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:20.232103 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:22.232453 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:24.237017 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:26.731921 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:28.732277 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:30.732821 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:33.233328 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:35.733530 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:38.232585 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:40.232822 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:42.233527 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:44.238453 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:46.732257 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:48.732687 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:51.233168 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:53.736801 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:56.232874 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:21:58.732509 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:00.737131 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:03.232057 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:05.232168 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:07.232237 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:09.233131 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:11.732796 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:13.733519 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:16.233219 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:18.731601 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:20.732672 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:22.732940 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:25.232064 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:27.232892 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:29.733136 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:32.233322 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:34.732891 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:37.232324 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:39.233286 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:41.732415 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:43.735768 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:46.233227 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:48.732309 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:50.732483 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:53.232717 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:55.732466 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:22:57.734556 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:00.277950 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:02.731248 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:04.745820 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:07.231990 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:09.732045 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:12.232594 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:14.233202 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:16.233583 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:18.732156 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:20.734939 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:23.234544 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:25.731660 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:27.733192 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:30.232285 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:32.233139 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:34.731625 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:37.232304 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:39.233314 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:41.732103 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:43.732236 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:45.732819 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:48.232215 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:50.232271 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:52.232582 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:54.232901 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:56.233125 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:23:58.733645 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:24:01.235465 1391899 pod_ready.go:103] pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace has status "Ready":"False"
I0127 13:24:01.726375 1391899 pod_ready.go:82] duration metric: took 4m0.000363583s for pod "metrics-server-9975d5f86-gkxmm" in "kube-system" namespace to be "Ready" ...
E0127 13:24:01.726466 1391899 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0127 13:24:01.726482 1391899 pod_ready.go:39] duration metric: took 5m23.735489594s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
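
Note the metrics-server wait advertises "up to 6m0s" but ends after exactly 4m0.000363583s with "context deadline exceeded": an enclosing wait budget expired before the per-pod timeout did. A sketch of how nested context deadlines produce that behavior (illustrative; the 4m outer budget is inferred from this log, not taken from minikube's source):

package main

import (
	"context"
	"fmt"
	"time"
)

// waitReady polls check until it succeeds or ctx is done; ctx may carry
// either the per-pod or the enclosing deadline, whichever is sooner.
func waitReady(ctx context.Context, check func() bool) error {
	for {
		if check() {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-time.After(2500 * time.Millisecond):
		}
	}
}

func main() {
	// Outer budget for the whole extra wait (the one that expires first here).
	outer, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	// Per-pod budget of 6m0s nested inside the outer context:
	// the effective deadline is min(outer, per-pod), i.e. 4m.
	perPod, cancelPod := context.WithTimeout(outer, 6*time.Minute)
	defer cancelPod()

	neverReady := func() bool { return false } // stands in for the ImagePullBackOff pod
	fmt.Println(waitReady(perPod, neverReady)) // after 4m: context deadline exceeded
}
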
I0127 13:24:01.726533 1391899 api_server.go:52] waiting for apiserver process to appear ...
I0127 13:24:01.726612 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0127 13:24:01.726717 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0127 13:24:01.764560 1391899 cri.go:89] found id: "9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7"
I0127 13:24:01.764749 1391899 cri.go:89] found id: "dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
I0127 13:24:01.764845 1391899 cri.go:89] found id: ""
I0127 13:24:01.764872 1391899 logs.go:282] 2 containers: [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7 dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba]
I0127 13:24:01.764982 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:01.769278 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:01.773146 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0127 13:24:01.773218 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0127 13:24:01.818544 1391899 cri.go:89] found id: "207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f"
I0127 13:24:01.818568 1391899 cri.go:89] found id: "f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
I0127 13:24:01.818574 1391899 cri.go:89] found id: ""
I0127 13:24:01.818581 1391899 logs.go:282] 2 containers: [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5]
I0127 13:24:01.818652 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:01.822831 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:01.826198 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0127 13:24:01.826281 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0127 13:24:01.865471 1391899 cri.go:89] found id: "6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
I0127 13:24:01.865537 1391899 cri.go:89] found id: "9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
I0127 13:24:01.865557 1391899 cri.go:89] found id: ""
I0127 13:24:01.865580 1391899 logs.go:282] 2 containers: [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c]
I0127 13:24:01.865647 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:01.873778 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:01.878357 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0127 13:24:01.878469 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0127 13:24:01.919284 1391899 cri.go:89] found id: "498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
I0127 13:24:01.919308 1391899 cri.go:89] found id: "4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
I0127 13:24:01.919312 1391899 cri.go:89] found id: ""
I0127 13:24:01.919320 1391899 logs.go:282] 2 containers: [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc]
I0127 13:24:01.919395 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:01.922958 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:01.926473 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0127 13:24:01.926545 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0127 13:24:01.969401 1391899 cri.go:89] found id: "53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
I0127 13:24:01.969467 1391899 cri.go:89] found id: "2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
I0127 13:24:01.969485 1391899 cri.go:89] found id: ""
I0127 13:24:01.969509 1391899 logs.go:282] 2 containers: [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6]
I0127 13:24:01.969583 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:01.973199 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:01.976743 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0127 13:24:01.976815 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0127 13:24:02.032067 1391899 cri.go:89] found id: "348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
I0127 13:24:02.032090 1391899 cri.go:89] found id: "fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
I0127 13:24:02.032096 1391899 cri.go:89] found id: ""
I0127 13:24:02.032103 1391899 logs.go:282] 2 containers: [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13]
I0127 13:24:02.032162 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:02.036128 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:02.039776 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0127 13:24:02.039886 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0127 13:24:02.091708 1391899 cri.go:89] found id: "98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
I0127 13:24:02.091730 1391899 cri.go:89] found id: "8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
I0127 13:24:02.091735 1391899 cri.go:89] found id: ""
I0127 13:24:02.091741 1391899 logs.go:282] 2 containers: [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7]
I0127 13:24:02.091855 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:02.095627 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:02.098976 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0127 13:24:02.099051 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0127 13:24:02.141711 1391899 cri.go:89] found id: "ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
I0127 13:24:02.141735 1391899 cri.go:89] found id: "eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
I0127 13:24:02.141741 1391899 cri.go:89] found id: ""
I0127 13:24:02.141756 1391899 logs.go:282] 2 containers: [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c]
I0127 13:24:02.141815 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:02.146300 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:02.149876 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0127 13:24:02.149945 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0127 13:24:02.193944 1391899 cri.go:89] found id: "84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
I0127 13:24:02.194026 1391899 cri.go:89] found id: ""
I0127 13:24:02.194041 1391899 logs.go:282] 1 container: [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1]
I0127 13:24:02.194119 1391899 ssh_runner.go:195] Run: which crictl
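
Each "listing CRI containers" step above shells out to `crictl ps -a --quiet --name=<component>` and collects the non-empty output lines as container IDs, which is what produces the "found id:" and "N containers:" lines (two IDs per component because the container restarted across the stop/start). A minimal sketch of that step (illustrative; minikube runs the command through ssh_runner on the node rather than locally):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited)
// whose name matches the given component, via crictl's --quiet output.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id) // one 64-char hex container ID per line
		}
	}
	return ids, nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kubernetes-dashboard"} {
		ids, err := listContainers(component)
		if err != nil {
			fmt.Println(component, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
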
I0127 13:24:02.198663 1391899 logs.go:123] Gathering logs for dmesg ...
I0127 13:24:02.198689 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0127 13:24:02.216133 1391899 logs.go:123] Gathering logs for describe nodes ...
I0127 13:24:02.216169 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0127 13:24:02.373008 1391899 logs.go:123] Gathering logs for kube-apiserver [dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba] ...
I0127 13:24:02.373056 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
I0127 13:24:02.431971 1391899 logs.go:123] Gathering logs for coredns [9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c] ...
I0127 13:24:02.432005 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
I0127 13:24:02.475356 1391899 logs.go:123] Gathering logs for storage-provisioner [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9] ...
I0127 13:24:02.475383 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
I0127 13:24:02.514117 1391899 logs.go:123] Gathering logs for storage-provisioner [eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c] ...
I0127 13:24:02.514145 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
I0127 13:24:02.561620 1391899 logs.go:123] Gathering logs for kubernetes-dashboard [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1] ...
I0127 13:24:02.561649 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
I0127 13:24:02.602626 1391899 logs.go:123] Gathering logs for kubelet ...
I0127 13:24:02.602653 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
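
The "Found kubelet problem" warnings that follow come from scanning that journalctl output for error-level pod-sync lines. A rough sketch of such a filter (an approximation; minikube's logs.go matching rules are more involved than a single substring check):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull the last 400 kubelet journal lines, as the Run: line above does.
	cmd := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	sc.Buffer(make([]byte, 0, 256*1024), 256*1024) // journal lines can exceed the default 64K buffer
	for sc.Scan() {
		line := sc.Text()
		// Flag kubelet pod-sync errors, the pattern behind "Found kubelet problem".
		if strings.Contains(line, "Error syncing pod") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
	_ = cmd.Wait()
}
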
W0127 13:24:02.668629 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.161443 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:02.670001 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.742912 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.673484 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:52 old-k8s-version-813213 kubelet[662]: E0127 13:18:52.582413 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:02.675905 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:00 old-k8s-version-813213 kubelet[662]: E0127 13:19:00.834386 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.676454 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:01 old-k8s-version-813213 kubelet[662]: E0127 13:19:01.844499 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.676766 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:04 old-k8s-version-813213 kubelet[662]: E0127 13:19:04.569022 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.677317 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:08 old-k8s-version-813213 kubelet[662]: E0127 13:19:08.040193 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.678628 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:10 old-k8s-version-813213 kubelet[662]: E0127 13:19:10.866611 662 pod_workers.go:191] Error syncing pod b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5 ("storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"
W0127 13:24:02.682650 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:16 old-k8s-version-813213 kubelet[662]: E0127 13:19:16.578183 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:02.683806 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:21 old-k8s-version-813213 kubelet[662]: E0127 13:19:21.914690 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.684306 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:28 old-k8s-version-813213 kubelet[662]: E0127 13:19:28.040605 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.684519 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:31 old-k8s-version-813213 kubelet[662]: E0127 13:19:31.569400 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.684888 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:40 old-k8s-version-813213 kubelet[662]: E0127 13:19:40.569017 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.685153 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:46 old-k8s-version-813213 kubelet[662]: E0127 13:19:46.569235 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.685873 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:54 old-k8s-version-813213 kubelet[662]: E0127 13:19:54.998749 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.686236 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:58 old-k8s-version-813213 kubelet[662]: E0127 13:19:58.040157 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.688891 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:00 old-k8s-version-813213 kubelet[662]: E0127 13:20:00.591029 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:02.689278 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:08 old-k8s-version-813213 kubelet[662]: E0127 13:20:08.568750 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.689490 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:14 old-k8s-version-813213 kubelet[662]: E0127 13:20:14.569356 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.689843 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:20 old-k8s-version-813213 kubelet[662]: E0127 13:20:20.568757 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.690056 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:25 old-k8s-version-813213 kubelet[662]: E0127 13:20:25.569626 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.690407 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:33 old-k8s-version-813213 kubelet[662]: E0127 13:20:33.569406 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.690618 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:37 old-k8s-version-813213 kubelet[662]: E0127 13:20:37.569368 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.691265 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:45 old-k8s-version-813213 kubelet[662]: E0127 13:20:45.161804 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.691625 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:48 old-k8s-version-813213 kubelet[662]: E0127 13:20:48.040150 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.691850 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:51 old-k8s-version-813213 kubelet[662]: E0127 13:20:51.569490 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.692202 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:02 old-k8s-version-813213 kubelet[662]: E0127 13:21:02.568801 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.692408 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:04 old-k8s-version-813213 kubelet[662]: E0127 13:21:04.569222 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.692619 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:15 old-k8s-version-813213 kubelet[662]: E0127 13:21:15.569338 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.692971 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:16 old-k8s-version-813213 kubelet[662]: E0127 13:21:16.568781 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.693356 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.569462 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.695822 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.578384 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:02.696186 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:41 old-k8s-version-813213 kubelet[662]: E0127 13:21:41.569455 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.696395 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:42 old-k8s-version-813213 kubelet[662]: E0127 13:21:42.569422 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.696746 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:52 old-k8s-version-813213 kubelet[662]: E0127 13:21:52.568642 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.696954 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:57 old-k8s-version-813213 kubelet[662]: E0127 13:21:57.570339 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.697320 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:03 old-k8s-version-813213 kubelet[662]: E0127 13:22:03.569863 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.697530 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:12 old-k8s-version-813213 kubelet[662]: E0127 13:22:12.569369 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.698242 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:15 old-k8s-version-813213 kubelet[662]: E0127 13:22:15.386979 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.698602 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:18 old-k8s-version-813213 kubelet[662]: E0127 13:22:18.040158 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.698812 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:26 old-k8s-version-813213 kubelet[662]: E0127 13:22:26.569341 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.699199 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:30 old-k8s-version-813213 kubelet[662]: E0127 13:22:30.568782 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.699410 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:37 old-k8s-version-813213 kubelet[662]: E0127 13:22:37.572662 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.699761 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:41 old-k8s-version-813213 kubelet[662]: E0127 13:22:41.568879 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.699978 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:49 old-k8s-version-813213 kubelet[662]: E0127 13:22:49.569740 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.700330 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:56 old-k8s-version-813213 kubelet[662]: E0127 13:22:56.568752 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.700537 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:03 old-k8s-version-813213 kubelet[662]: E0127 13:23:03.569135 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.700905 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:11 old-k8s-version-813213 kubelet[662]: E0127 13:23:11.569770 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.701163 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:16 old-k8s-version-813213 kubelet[662]: E0127 13:23:16.569194 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.701580 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:26 old-k8s-version-813213 kubelet[662]: E0127 13:23:26.568839 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.701771 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.702095 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.702276 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:02.702598 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:02.702777 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
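The "Found kubelet problem" entries above come from minikube's log scanner (logs.go) reading the kubelet journal on the node. A manual equivalent, assuming the same profile name and that the kubelet runs as a systemd unit inside the node (as it does here), would be:

    minikube -p old-k8s-version-813213 ssh -- sudo journalctl -u kubelet -n 400 | grep pod_workers.go

The grep on pod_workers.go is only a convenient filter matching the error site shown in each entry; it is not part of minikube's own scan.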
I0127 13:24:02.702788 1391899 logs.go:123] Gathering logs for kube-scheduler [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53] ...
I0127 13:24:02.702805 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
I0127 13:24:02.750568 1391899 logs.go:123] Gathering logs for kube-controller-manager [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6] ...
I0127 13:24:02.750648 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
I0127 13:24:02.818023 1391899 logs.go:123] Gathering logs for kindnet [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7] ...
I0127 13:24:02.818058 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
I0127 13:24:02.860506 1391899 logs.go:123] Gathering logs for kube-apiserver [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7] ...
I0127 13:24:02.860534 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7"
I0127 13:24:02.930144 1391899 logs.go:123] Gathering logs for kube-scheduler [4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc] ...
I0127 13:24:02.930197 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
I0127 13:24:02.976555 1391899 logs.go:123] Gathering logs for containerd ...
I0127 13:24:02.976587 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0127 13:24:03.038872 1391899 logs.go:123] Gathering logs for etcd [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f] ...
I0127 13:24:03.038913 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f"
I0127 13:24:03.089942 1391899 logs.go:123] Gathering logs for coredns [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579] ...
I0127 13:24:03.089974 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
I0127 13:24:03.132438 1391899 logs.go:123] Gathering logs for kube-proxy [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676] ...
I0127 13:24:03.132467 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
I0127 13:24:03.183093 1391899 logs.go:123] Gathering logs for kube-proxy [2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6] ...
I0127 13:24:03.183121 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
I0127 13:24:03.223735 1391899 logs.go:123] Gathering logs for kube-controller-manager [fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13] ...
I0127 13:24:03.223763 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
I0127 13:24:03.284290 1391899 logs.go:123] Gathering logs for kindnet [8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7] ...
I0127 13:24:03.284363 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
I0127 13:24:03.334184 1391899 logs.go:123] Gathering logs for container status ...
I0127 13:24:03.334221 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0127 13:24:03.377673 1391899 logs.go:123] Gathering logs for etcd [f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5] ...
I0127 13:24:03.377703 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
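Each "Gathering logs for X [id]" step above shells into the node and tails that container's log through crictl. To replay one of these by hand, assuming containerd's default CRI socket so plain crictl works, the same two commands minikube issues are:

    sudo crictl ps -a
    sudo /usr/bin/crictl logs --tail 400 <container-id>

where <container-id> is any of the 64-character IDs listed above.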
I0127 13:24:03.418187 1391899 out.go:358] Setting ErrFile to fd 2...
I0127 13:24:03.418214 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0127 13:24:03.418270 1391899 out.go:270] X Problems detected in kubelet:
W0127 13:24:03.418286 1391899 out.go:270] Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:03.418299 1391899 out.go:270] Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:03.418306 1391899 out.go:270] Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:03.418317 1391899 out.go:270] Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:03.418325 1391899 out.go:270] Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0127 13:24:03.418337 1391899 out.go:358] Setting ErrFile to fd 2...
I0127 13:24:03.418343 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
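The summary above reduces to two recurring failures. The metrics-server ImagePullBackOff is expected in this test: the deployment is deliberately pointed at fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable registry (the fuller kubelet entries later in the log show "lookup fake.domain ... no such host"). The dashboard-metrics-scraper pod is in CrashLoopBackOff. To inspect either pod directly, assuming kubectl is pointed at this profile's kubeconfig (both pod names are taken from the entries above):

    kubectl -n kube-system describe pod metrics-server-9975d5f86-gkxmm
    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-s2b59 --previous

The --previous flag prints the output of the last crashed container rather than the current attempt.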
I0127 13:24:13.421468 1391899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:24:13.436058 1391899 api_server.go:72] duration metric: took 5m51.649333782s to wait for apiserver process to appear ...
I0127 13:24:13.436095 1391899 api_server.go:88] waiting for apiserver healthz status ...
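The healthz wait that starts here polls the apiserver's /healthz endpoint until it returns ok or the wait times out. A manual check against the same cluster, assuming kubectl is pointed at the old-k8s-version-813213 kubeconfig, is:

    kubectl get --raw /healthz

which should print ok once the apiserver is ready.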
I0127 13:24:13.436141 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0127 13:24:13.436204 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0127 13:24:13.494095 1391899 cri.go:89] found id: "9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7"
I0127 13:24:13.494129 1391899 cri.go:89] found id: "dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
I0127 13:24:13.494135 1391899 cri.go:89] found id: ""
I0127 13:24:13.494145 1391899 logs.go:282] 2 containers: [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7 dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba]
I0127 13:24:13.494216 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.498830 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.503370 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0127 13:24:13.503440 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0127 13:24:13.566783 1391899 cri.go:89] found id: "207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f"
I0127 13:24:13.566804 1391899 cri.go:89] found id: "f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
I0127 13:24:13.566809 1391899 cri.go:89] found id: ""
I0127 13:24:13.566815 1391899 logs.go:282] 2 containers: [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5]
I0127 13:24:13.566884 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.571754 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.579722 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0127 13:24:13.579801 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0127 13:24:13.636050 1391899 cri.go:89] found id: "6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
I0127 13:24:13.636069 1391899 cri.go:89] found id: "9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
I0127 13:24:13.636074 1391899 cri.go:89] found id: ""
I0127 13:24:13.636081 1391899 logs.go:282] 2 containers: [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c]
I0127 13:24:13.636140 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.641250 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.645845 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0127 13:24:13.645910 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0127 13:24:13.730109 1391899 cri.go:89] found id: "498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
I0127 13:24:13.730126 1391899 cri.go:89] found id: "4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
I0127 13:24:13.730131 1391899 cri.go:89] found id: ""
I0127 13:24:13.730138 1391899 logs.go:282] 2 containers: [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc]
I0127 13:24:13.730188 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.735061 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.739961 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0127 13:24:13.740030 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0127 13:24:13.793549 1391899 cri.go:89] found id: "53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
I0127 13:24:13.793568 1391899 cri.go:89] found id: "2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
I0127 13:24:13.793573 1391899 cri.go:89] found id: ""
I0127 13:24:13.793580 1391899 logs.go:282] 2 containers: [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6]
I0127 13:24:13.793635 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.798974 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.803128 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0127 13:24:13.803199 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0127 13:24:13.865547 1391899 cri.go:89] found id: "348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
I0127 13:24:13.865586 1391899 cri.go:89] found id: "fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
I0127 13:24:13.865591 1391899 cri.go:89] found id: ""
I0127 13:24:13.865597 1391899 logs.go:282] 2 containers: [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13]
I0127 13:24:13.865654 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.869602 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.873071 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0127 13:24:13.873189 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0127 13:24:13.920522 1391899 cri.go:89] found id: "98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
I0127 13:24:13.920541 1391899 cri.go:89] found id: "8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
I0127 13:24:13.920546 1391899 cri.go:89] found id: ""
I0127 13:24:13.920553 1391899 logs.go:282] 2 containers: [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7]
I0127 13:24:13.920606 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.924728 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.928717 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0127 13:24:13.928777 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0127 13:24:13.981265 1391899 cri.go:89] found id: "ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
I0127 13:24:13.981289 1391899 cri.go:89] found id: "eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
I0127 13:24:13.981294 1391899 cri.go:89] found id: ""
I0127 13:24:13.981300 1391899 logs.go:282] 2 containers: [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c]
I0127 13:24:13.981386 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.985260 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.988991 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0127 13:24:13.989131 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0127 13:24:14.041772 1391899 cri.go:89] found id: "84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
I0127 13:24:14.041793 1391899 cri.go:89] found id: ""
I0127 13:24:14.041801 1391899 logs.go:282] 1 container: [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1]
I0127 13:24:14.041860 1391899 ssh_runner.go:195] Run: which crictl
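The block above repeats one discovery pattern per component: locate crictl, then list all containers (running and exited) whose name matches the component; two IDs per component means one container from before the restart and one from after. Condensed into a single loop over the same component names, assuming it is run inside the node:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
        echo "== $c =="
        sudo crictl ps -a --quiet --name="$c"
    done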
I0127 13:24:14.045758 1391899 logs.go:123] Gathering logs for kube-apiserver [dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba] ...
I0127 13:24:14.045783 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
I0127 13:24:14.119271 1391899 logs.go:123] Gathering logs for coredns [9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c] ...
I0127 13:24:14.119329 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
I0127 13:24:14.184713 1391899 logs.go:123] Gathering logs for dmesg ...
I0127 13:24:14.184744 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0127 13:24:14.205804 1391899 logs.go:123] Gathering logs for describe nodes ...
I0127 13:24:14.205839 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0127 13:24:14.425188 1391899 logs.go:123] Gathering logs for kube-apiserver [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7] ...
I0127 13:24:14.425269 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7"
I0127 13:24:14.513059 1391899 logs.go:123] Gathering logs for kindnet [8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7] ...
I0127 13:24:14.513133 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
I0127 13:24:14.569064 1391899 logs.go:123] Gathering logs for storage-provisioner [eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c] ...
I0127 13:24:14.569092 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
I0127 13:24:14.641486 1391899 logs.go:123] Gathering logs for kubelet ...
I0127 13:24:14.641555 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0127 13:24:14.715359 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.161443 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:14.716229 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.742912 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.719582 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:52 old-k8s-version-813213 kubelet[662]: E0127 13:18:52.582413 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:14.721888 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:00 old-k8s-version-813213 kubelet[662]: E0127 13:19:00.834386 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.722259 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:01 old-k8s-version-813213 kubelet[662]: E0127 13:19:01.844499 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.722476 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:04 old-k8s-version-813213 kubelet[662]: E0127 13:19:04.569022 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.722844 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:08 old-k8s-version-813213 kubelet[662]: E0127 13:19:08.040193 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.723653 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:10 old-k8s-version-813213 kubelet[662]: E0127 13:19:10.866611 662 pod_workers.go:191] Error syncing pod b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5 ("storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"
W0127 13:24:14.726364 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:16 old-k8s-version-813213 kubelet[662]: E0127 13:19:16.578183 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:14.727343 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:21 old-k8s-version-813213 kubelet[662]: E0127 13:19:21.914690 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.727830 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:28 old-k8s-version-813213 kubelet[662]: E0127 13:19:28.040605 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.728048 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:31 old-k8s-version-813213 kubelet[662]: E0127 13:19:31.569400 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.728403 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:40 old-k8s-version-813213 kubelet[662]: E0127 13:19:40.569017 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.728614 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:46 old-k8s-version-813213 kubelet[662]: E0127 13:19:46.569235 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.729266 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:54 old-k8s-version-813213 kubelet[662]: E0127 13:19:54.998749 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.729680 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:58 old-k8s-version-813213 kubelet[662]: E0127 13:19:58.040157 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.732147 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:00 old-k8s-version-813213 kubelet[662]: E0127 13:20:00.591029 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:14.732514 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:08 old-k8s-version-813213 kubelet[662]: E0127 13:20:08.568750 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.732722 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:14 old-k8s-version-813213 kubelet[662]: E0127 13:20:14.569356 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.733086 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:20 old-k8s-version-813213 kubelet[662]: E0127 13:20:20.568757 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.733291 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:25 old-k8s-version-813213 kubelet[662]: E0127 13:20:25.569626 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.733665 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:33 old-k8s-version-813213 kubelet[662]: E0127 13:20:33.569406 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.733881 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:37 old-k8s-version-813213 kubelet[662]: E0127 13:20:37.569368 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.734571 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:45 old-k8s-version-813213 kubelet[662]: E0127 13:20:45.161804 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.734937 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:48 old-k8s-version-813213 kubelet[662]: E0127 13:20:48.040150 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.735156 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:51 old-k8s-version-813213 kubelet[662]: E0127 13:20:51.569490 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.735527 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:02 old-k8s-version-813213 kubelet[662]: E0127 13:21:02.568801 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.735733 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:04 old-k8s-version-813213 kubelet[662]: E0127 13:21:04.569222 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.735945 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:15 old-k8s-version-813213 kubelet[662]: E0127 13:21:15.569338 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.736307 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:16 old-k8s-version-813213 kubelet[662]: E0127 13:21:16.568781 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.736659 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.569462 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.739220 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.578384 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:14.739590 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:41 old-k8s-version-813213 kubelet[662]: E0127 13:21:41.569455 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.739807 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:42 old-k8s-version-813213 kubelet[662]: E0127 13:21:42.569422 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.740162 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:52 old-k8s-version-813213 kubelet[662]: E0127 13:21:52.568642 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.740367 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:57 old-k8s-version-813213 kubelet[662]: E0127 13:21:57.570339 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.740713 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:03 old-k8s-version-813213 kubelet[662]: E0127 13:22:03.569863 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.740923 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:12 old-k8s-version-813213 kubelet[662]: E0127 13:22:12.569369 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.741548 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:15 old-k8s-version-813213 kubelet[662]: E0127 13:22:15.386979 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.741905 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:18 old-k8s-version-813213 kubelet[662]: E0127 13:22:18.040158 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.742108 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:26 old-k8s-version-813213 kubelet[662]: E0127 13:22:26.569341 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.742460 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:30 old-k8s-version-813213 kubelet[662]: E0127 13:22:30.568782 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.742682 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:37 old-k8s-version-813213 kubelet[662]: E0127 13:22:37.572662 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.743095 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:41 old-k8s-version-813213 kubelet[662]: E0127 13:22:41.568879 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.743281 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:49 old-k8s-version-813213 kubelet[662]: E0127 13:22:49.569740 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.743626 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:56 old-k8s-version-813213 kubelet[662]: E0127 13:22:56.568752 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.743813 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:03 old-k8s-version-813213 kubelet[662]: E0127 13:23:03.569135 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.744134 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:11 old-k8s-version-813213 kubelet[662]: E0127 13:23:11.569770 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.744341 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:16 old-k8s-version-813213 kubelet[662]: E0127 13:23:16.569194 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.744684 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:26 old-k8s-version-813213 kubelet[662]: E0127 13:23:26.568839 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.744893 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.745252 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.745454 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.745819 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.746065 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.746417 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.746624 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.746985 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
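Read chronologically, the dashboard-metrics-scraper entries above trace the kubelet's CrashLoopBackOff schedule: the restart delay doubles from 10s to 20s, 40s, 1m20s, and 2m40s, and would cap at the kubelet's 5m maximum on the next failure. To watch the restart counter climb in real time, assuming access to the same cluster:

    kubectl -n kubernetes-dashboard get pod dashboard-metrics-scraper-8d5bb5db8-s2b59 -w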
I0127 13:24:14.747000 1391899 logs.go:123] Gathering logs for coredns [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579] ...
I0127 13:24:14.747029 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
I0127 13:24:14.802969 1391899 logs.go:123] Gathering logs for kube-proxy [2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6] ...
I0127 13:24:14.802997 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
I0127 13:24:14.885354 1391899 logs.go:123] Gathering logs for kube-proxy [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676] ...
I0127 13:24:14.885377 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
I0127 13:24:14.943705 1391899 logs.go:123] Gathering logs for kube-controller-manager [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6] ...
I0127 13:24:14.943731 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
I0127 13:24:15.004077 1391899 logs.go:123] Gathering logs for storage-provisioner [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9] ...
I0127 13:24:15.004163 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
I0127 13:24:15.066986 1391899 logs.go:123] Gathering logs for kubernetes-dashboard [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1] ...
I0127 13:24:15.067101 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
I0127 13:24:15.157150 1391899 logs.go:123] Gathering logs for etcd [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f] ...
I0127 13:24:15.157183 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f"
I0127 13:24:15.234051 1391899 logs.go:123] Gathering logs for etcd [f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5] ...
I0127 13:24:15.234091 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
I0127 13:24:15.331724 1391899 logs.go:123] Gathering logs for kube-scheduler [4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc] ...
I0127 13:24:15.331918 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
I0127 13:24:15.411973 1391899 logs.go:123] Gathering logs for containerd ...
I0127 13:24:15.412006 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0127 13:24:15.508734 1391899 logs.go:123] Gathering logs for container status ...
I0127 13:24:15.508770 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0127 13:24:15.593697 1391899 logs.go:123] Gathering logs for kube-scheduler [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53] ...
I0127 13:24:15.593769 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
I0127 13:24:15.652854 1391899 logs.go:123] Gathering logs for kube-controller-manager [fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13] ...
I0127 13:24:15.652938 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
I0127 13:24:15.783362 1391899 logs.go:123] Gathering logs for kindnet [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7] ...
I0127 13:24:15.783455 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
I0127 13:24:15.840977 1391899 out.go:358] Setting ErrFile to fd 2...
I0127 13:24:15.841062 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0127 13:24:15.841144 1391899 out.go:270] X Problems detected in kubelet:
W0127 13:24:15.841186 1391899 out.go:270] Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:15.841217 1391899 out.go:270] Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:15.841278 1391899 out.go:270] Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:15.841310 1391899 out.go:270] Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:15.841339 1391899 out.go:270] Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
I0127 13:24:15.841385 1391899 out.go:358] Setting ErrFile to fd 2...
I0127 13:24:15.841414 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:24:25.842651 1391899 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0127 13:24:25.852586 1391899 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0127 13:24:25.855881 1391899 out.go:201]
W0127 13:24:25.858640 1391899 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0127 13:24:25.858717 1391899 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0127 13:24:25.858737 1391899 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0127 13:24:25.858743 1391899 out.go:270] *
W0127 13:24:25.860146 1391899 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0127 13:24:25.861945 1391899 out.go:201]
** /stderr **
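For reference, the recovery path named in the suggestion above can be scripted as below. This is a minimal sketch, not a verified fix: the profile name and flags are copied from the failing invocation earlier in this log.

    # Wipe all minikube profiles and cached state, per the log's own suggestion:
    minikube delete --all --purge
    # Then retry the same second start from a clean slate:
    out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 --alsologtostderr --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0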
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-813213 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-813213
helpers_test.go:235: (dbg) docker inspect old-k8s-version-813213:
-- stdout --
[
{
"Id": "01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b",
"Created": "2025-01-27T13:15:07.409447941Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1392093,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-01-27T13:18:14.240947954Z",
"FinishedAt": "2025-01-27T13:18:12.850680659Z"
},
"Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
"ResolvConfPath": "/var/lib/docker/containers/01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b/hostname",
"HostsPath": "/var/lib/docker/containers/01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b/hosts",
"LogPath": "/var/lib/docker/containers/01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b/01d0bc6920ab333099edf1003276a07c986ab78ad4863f8c4206becdeb1ce19b-json.log",
"Name": "/old-k8s-version-813213",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"old-k8s-version-813213:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-813213",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/016debb10dacf6bae7dfea2f47ccd39925ae9e9855d17209bb2fce2a397f34b2-init/diff:/var/lib/docker/overlay2/040f98a182d1ab4d08a5b3f3ff6e1a3c8ab5a734c543c8ed242541f9c435fd6a/diff",
"MergedDir": "/var/lib/docker/overlay2/016debb10dacf6bae7dfea2f47ccd39925ae9e9855d17209bb2fce2a397f34b2/merged",
"UpperDir": "/var/lib/docker/overlay2/016debb10dacf6bae7dfea2f47ccd39925ae9e9855d17209bb2fce2a397f34b2/diff",
"WorkDir": "/var/lib/docker/overlay2/016debb10dacf6bae7dfea2f47ccd39925ae9e9855d17209bb2fce2a397f34b2/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "old-k8s-version-813213",
"Source": "/var/lib/docker/volumes/old-k8s-version-813213/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "old-k8s-version-813213",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-813213",
"name.minikube.sigs.k8s.io": "old-k8s-version-813213",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "b109f4da99beb548df2edf2aced2b50638a8e97c3385dc4ba1a13d90541d7a53",
"SandboxKey": "/var/run/docker/netns/b109f4da99be",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34227"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34228"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34231"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34229"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34230"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-813213": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "7a4039aaf07ee86cf251afb61766a26e4e33a7d3cffa9eb8f0bfae29a1c2990f",
"EndpointID": "c7959dd6a08541060b1cc125516e4b85be2b677abe057c2f27964c9c1544149a",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-813213",
"01d0bc6920ab"
]
}
}
}
}
]
-- /stdout --
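The inspect dump above is exhaustive; when only the container's state and forwarded ports matter, docker inspect's standard --format flag narrows the output. A sketch against the same container:

    # Container state and restart count only:
    docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-813213
    # Host-side port bindings as JSON:
    docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-813213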
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-813213 -n old-k8s-version-813213
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-813213 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-813213 logs -n 25: (2.52189905s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-135138 | cert-expiration-135138 | jenkins | v1.35.0 | 27 Jan 25 13:13 UTC | 27 Jan 25 13:14 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-852325 | force-systemd-env-852325 | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-852325 | force-systemd-env-852325 | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
| start | -p cert-options-511343 | cert-options-511343 | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-511343 ssh | cert-options-511343 | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-511343 -- sudo | cert-options-511343 | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-511343 | cert-options-511343 | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:14 UTC |
| start | -p old-k8s-version-813213 | old-k8s-version-813213 | jenkins | v1.35.0 | 27 Jan 25 13:14 UTC | 27 Jan 25 13:17 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-135138 | cert-expiration-135138 | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-135138 | cert-expiration-135138 | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
| start | -p no-preload-181914 | no-preload-181914 | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:18 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable metrics-server -p old-k8s-version-813213 | old-k8s-version-813213 | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:18 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-813213 | old-k8s-version-813213 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-813213 | old-k8s-version-813213 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-813213 | old-k8s-version-813213 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-181914 | no-preload-181914 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-181914 | no-preload-181914 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:19 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-181914 | no-preload-181914 | jenkins | v1.35.0 | 27 Jan 25 13:19 UTC | 27 Jan 25 13:19 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-181914 | no-preload-181914 | jenkins | v1.35.0 | 27 Jan 25 13:19 UTC | 27 Jan 25 13:23 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| image | no-preload-181914 image list | no-preload-181914 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-181914 | no-preload-181914 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-181914 | no-preload-181914 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-181914 | no-preload-181914 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
| delete | -p no-preload-181914 | no-preload-181914 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
| start | -p embed-certs-434512 | embed-certs-434512 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/27 13:24:13
Running on machine: ip-172-31-29-130
Binary: Built with gc go1.23.4 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
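Given the [IWEF] line format documented above, warning- and error-level lines can be pulled out of a saved copy of this log with a plain pattern match. A sketch, assuming the log was saved as logs.txt per the issue-reporting box earlier:

    # Keep only klog warning/error lines, e.g. "W0127 13:24:14.745819 ...":
    grep -E '^[WE][0-9]{4} ' logs.txt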
I0127 13:24:13.918521 1402708 out.go:345] Setting OutFile to fd 1 ...
I0127 13:24:13.918726 1402708 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:24:13.918754 1402708 out.go:358] Setting ErrFile to fd 2...
I0127 13:24:13.918774 1402708 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:24:13.919073 1402708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-1181389/.minikube/bin
I0127 13:24:13.919814 1402708 out.go:352] Setting JSON to false
I0127 13:24:13.921438 1402708 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21999,"bootTime":1737962255,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I0127 13:24:13.921543 1402708 start.go:139] virtualization:
I0127 13:24:13.925324 1402708 out.go:177] * [embed-certs-434512] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0127 13:24:13.928489 1402708 notify.go:220] Checking for updates...
I0127 13:24:13.932245 1402708 out.go:177] - MINIKUBE_LOCATION=20317
I0127 13:24:13.934964 1402708 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 13:24:13.937552 1402708 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20317-1181389/kubeconfig
I0127 13:24:13.940293 1402708 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-1181389/.minikube
I0127 13:24:13.942996 1402708 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0127 13:24:13.945710 1402708 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 13:24:13.949156 1402708 config.go:182] Loaded profile config "old-k8s-version-813213": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0127 13:24:13.949293 1402708 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 13:24:13.987890 1402708 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
I0127 13:24:13.988000 1402708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0127 13:24:14.101532 1402708 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-27 13:24:14.085872952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0127 13:24:14.101672 1402708 docker.go:318] overlay module found
I0127 13:24:14.106440 1402708 out.go:177] * Using the docker driver based on user configuration
I0127 13:24:14.108996 1402708 start.go:297] selected driver: docker
I0127 13:24:14.109014 1402708 start.go:901] validating driver "docker" against <nil>
I0127 13:24:14.109069 1402708 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 13:24:14.109952 1402708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0127 13:24:14.222233 1402708 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-27 13:24:14.210058419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0127 13:24:14.222462 1402708 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0127 13:24:14.222752 1402708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 13:24:14.225528 1402708 out.go:177] * Using Docker driver with root privileges
I0127 13:24:14.228158 1402708 cni.go:84] Creating CNI manager for ""
I0127 13:24:14.228232 1402708 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0127 13:24:14.228244 1402708 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0127 13:24:14.228329 1402708 start.go:340] cluster config:
{Name:embed-certs-434512 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-434512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 13:24:14.231216 1402708 out.go:177] * Starting "embed-certs-434512" primary control-plane node in "embed-certs-434512" cluster
I0127 13:24:14.233857 1402708 cache.go:121] Beginning downloading kic base image for docker with containerd
I0127 13:24:14.236575 1402708 out.go:177] * Pulling base image v0.0.46 ...
I0127 13:24:14.239208 1402708 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 13:24:14.239262 1402708 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
I0127 13:24:14.239270 1402708 cache.go:56] Caching tarball of preloaded images
I0127 13:24:14.239328 1402708 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0127 13:24:14.239598 1402708 preload.go:172] Found /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0127 13:24:14.239613 1402708 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
I0127 13:24:14.239717 1402708 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/embed-certs-434512/config.json ...
I0127 13:24:14.239733 1402708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-1181389/.minikube/profiles/embed-certs-434512/config.json: {Name:mk7721c0da76923e66fe0d486f38160c27950491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:24:14.262893 1402708 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0127 13:24:14.262912 1402708 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0127 13:24:14.262924 1402708 cache.go:227] Successfully downloaded all kic artifacts
I0127 13:24:14.262946 1402708 start.go:360] acquireMachinesLock for embed-certs-434512: {Name:mk2586b0657c09793a36438ce1b60de336afbd2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:24:14.263057 1402708 start.go:364] duration metric: took 95.718µs to acquireMachinesLock for "embed-certs-434512"
I0127 13:24:14.263082 1402708 start.go:93] Provisioning new machine with config: &{Name:embed-certs-434512 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-434512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 13:24:14.263153 1402708 start.go:125] createHost starting for "" (driver="docker")
I0127 13:24:13.636050 1391899 cri.go:89] found id: "6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
I0127 13:24:13.636069 1391899 cri.go:89] found id: "9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
I0127 13:24:13.636074 1391899 cri.go:89] found id: ""
I0127 13:24:13.636081 1391899 logs.go:282] 2 containers: [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c]
I0127 13:24:13.636140 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.641250 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.645845 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0127 13:24:13.645910 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0127 13:24:13.730109 1391899 cri.go:89] found id: "498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
I0127 13:24:13.730126 1391899 cri.go:89] found id: "4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
I0127 13:24:13.730131 1391899 cri.go:89] found id: ""
I0127 13:24:13.730138 1391899 logs.go:282] 2 containers: [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc]
I0127 13:24:13.730188 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.735061 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.739961 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0127 13:24:13.740030 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0127 13:24:13.793549 1391899 cri.go:89] found id: "53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
I0127 13:24:13.793568 1391899 cri.go:89] found id: "2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
I0127 13:24:13.793573 1391899 cri.go:89] found id: ""
I0127 13:24:13.793580 1391899 logs.go:282] 2 containers: [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6]
I0127 13:24:13.793635 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.798974 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.803128 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0127 13:24:13.803199 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0127 13:24:13.865547 1391899 cri.go:89] found id: "348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
I0127 13:24:13.865586 1391899 cri.go:89] found id: "fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
I0127 13:24:13.865591 1391899 cri.go:89] found id: ""
I0127 13:24:13.865597 1391899 logs.go:282] 2 containers: [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13]
I0127 13:24:13.865654 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.869602 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.873071 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0127 13:24:13.873189 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0127 13:24:13.920522 1391899 cri.go:89] found id: "98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
I0127 13:24:13.920541 1391899 cri.go:89] found id: "8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
I0127 13:24:13.920546 1391899 cri.go:89] found id: ""
I0127 13:24:13.920553 1391899 logs.go:282] 2 containers: [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7]
I0127 13:24:13.920606 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.924728 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.928717 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0127 13:24:13.928777 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0127 13:24:13.981265 1391899 cri.go:89] found id: "ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
I0127 13:24:13.981289 1391899 cri.go:89] found id: "eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
I0127 13:24:13.981294 1391899 cri.go:89] found id: ""
I0127 13:24:13.981300 1391899 logs.go:282] 2 containers: [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c]
I0127 13:24:13.981386 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.985260 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:13.988991 1391899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0127 13:24:13.989131 1391899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0127 13:24:14.041772 1391899 cri.go:89] found id: "84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
I0127 13:24:14.041793 1391899 cri.go:89] found id: ""
I0127 13:24:14.041801 1391899 logs.go:282] 1 containers: [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1]
I0127 13:24:14.041860 1391899 ssh_runner.go:195] Run: which crictl
I0127 13:24:14.045758 1391899 logs.go:123] Gathering logs for kube-apiserver [dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba] ...
I0127 13:24:14.045783 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba"
I0127 13:24:14.119271 1391899 logs.go:123] Gathering logs for coredns [9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c] ...
I0127 13:24:14.119329 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c"
I0127 13:24:14.184713 1391899 logs.go:123] Gathering logs for dmesg ...
I0127 13:24:14.184744 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0127 13:24:14.205804 1391899 logs.go:123] Gathering logs for describe nodes ...
I0127 13:24:14.205839 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0127 13:24:14.425188 1391899 logs.go:123] Gathering logs for kube-apiserver [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7] ...
I0127 13:24:14.425269 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7"
I0127 13:24:14.513059 1391899 logs.go:123] Gathering logs for kindnet [8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7] ...
I0127 13:24:14.513133 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7"
I0127 13:24:14.569064 1391899 logs.go:123] Gathering logs for storage-provisioner [eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c] ...
I0127 13:24:14.569092 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c"
I0127 13:24:14.641486 1391899 logs.go:123] Gathering logs for kubelet ...
I0127 13:24:14.641555 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0127 13:24:14.715359 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.161443 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:14.716229 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:40 old-k8s-version-813213 kubelet[662]: E0127 13:18:40.742912 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.719582 1391899 logs.go:138] Found kubelet problem: Jan 27 13:18:52 old-k8s-version-813213 kubelet[662]: E0127 13:18:52.582413 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:14.721888 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:00 old-k8s-version-813213 kubelet[662]: E0127 13:19:00.834386 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.722259 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:01 old-k8s-version-813213 kubelet[662]: E0127 13:19:01.844499 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.722476 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:04 old-k8s-version-813213 kubelet[662]: E0127 13:19:04.569022 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.722844 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:08 old-k8s-version-813213 kubelet[662]: E0127 13:19:08.040193 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.723653 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:10 old-k8s-version-813213 kubelet[662]: E0127 13:19:10.866611 662 pod_workers.go:191] Error syncing pod b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5 ("storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3ee3aee-1b8f-4040-9cbf-f87cb41abfd5)"
W0127 13:24:14.726364 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:16 old-k8s-version-813213 kubelet[662]: E0127 13:19:16.578183 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:14.727343 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:21 old-k8s-version-813213 kubelet[662]: E0127 13:19:21.914690 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.727830 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:28 old-k8s-version-813213 kubelet[662]: E0127 13:19:28.040605 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.728048 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:31 old-k8s-version-813213 kubelet[662]: E0127 13:19:31.569400 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.728403 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:40 old-k8s-version-813213 kubelet[662]: E0127 13:19:40.569017 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.728614 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:46 old-k8s-version-813213 kubelet[662]: E0127 13:19:46.569235 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.729266 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:54 old-k8s-version-813213 kubelet[662]: E0127 13:19:54.998749 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.729680 1391899 logs.go:138] Found kubelet problem: Jan 27 13:19:58 old-k8s-version-813213 kubelet[662]: E0127 13:19:58.040157 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.732147 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:00 old-k8s-version-813213 kubelet[662]: E0127 13:20:00.591029 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:14.732514 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:08 old-k8s-version-813213 kubelet[662]: E0127 13:20:08.568750 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.732722 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:14 old-k8s-version-813213 kubelet[662]: E0127 13:20:14.569356 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.733086 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:20 old-k8s-version-813213 kubelet[662]: E0127 13:20:20.568757 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.733291 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:25 old-k8s-version-813213 kubelet[662]: E0127 13:20:25.569626 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.733665 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:33 old-k8s-version-813213 kubelet[662]: E0127 13:20:33.569406 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.733881 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:37 old-k8s-version-813213 kubelet[662]: E0127 13:20:37.569368 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.734571 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:45 old-k8s-version-813213 kubelet[662]: E0127 13:20:45.161804 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.734937 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:48 old-k8s-version-813213 kubelet[662]: E0127 13:20:48.040150 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.735156 1391899 logs.go:138] Found kubelet problem: Jan 27 13:20:51 old-k8s-version-813213 kubelet[662]: E0127 13:20:51.569490 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.735527 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:02 old-k8s-version-813213 kubelet[662]: E0127 13:21:02.568801 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.735733 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:04 old-k8s-version-813213 kubelet[662]: E0127 13:21:04.569222 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.735945 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:15 old-k8s-version-813213 kubelet[662]: E0127 13:21:15.569338 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.736307 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:16 old-k8s-version-813213 kubelet[662]: E0127 13:21:16.568781 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.736659 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.569462 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.739220 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:30 old-k8s-version-813213 kubelet[662]: E0127 13:21:30.578384 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0127 13:24:14.739590 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:41 old-k8s-version-813213 kubelet[662]: E0127 13:21:41.569455 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.739807 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:42 old-k8s-version-813213 kubelet[662]: E0127 13:21:42.569422 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.740162 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:52 old-k8s-version-813213 kubelet[662]: E0127 13:21:52.568642 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.740367 1391899 logs.go:138] Found kubelet problem: Jan 27 13:21:57 old-k8s-version-813213 kubelet[662]: E0127 13:21:57.570339 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.740713 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:03 old-k8s-version-813213 kubelet[662]: E0127 13:22:03.569863 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.740923 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:12 old-k8s-version-813213 kubelet[662]: E0127 13:22:12.569369 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.741548 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:15 old-k8s-version-813213 kubelet[662]: E0127 13:22:15.386979 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.741905 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:18 old-k8s-version-813213 kubelet[662]: E0127 13:22:18.040158 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.742108 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:26 old-k8s-version-813213 kubelet[662]: E0127 13:22:26.569341 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.742460 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:30 old-k8s-version-813213 kubelet[662]: E0127 13:22:30.568782 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.742682 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:37 old-k8s-version-813213 kubelet[662]: E0127 13:22:37.572662 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.743095 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:41 old-k8s-version-813213 kubelet[662]: E0127 13:22:41.568879 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.743281 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:49 old-k8s-version-813213 kubelet[662]: E0127 13:22:49.569740 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.743626 1391899 logs.go:138] Found kubelet problem: Jan 27 13:22:56 old-k8s-version-813213 kubelet[662]: E0127 13:22:56.568752 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.743813 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:03 old-k8s-version-813213 kubelet[662]: E0127 13:23:03.569135 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.744134 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:11 old-k8s-version-813213 kubelet[662]: E0127 13:23:11.569770 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.744341 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:16 old-k8s-version-813213 kubelet[662]: E0127 13:23:16.569194 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.744684 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:26 old-k8s-version-813213 kubelet[662]: E0127 13:23:26.568839 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.744893 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.745252 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.745454 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.745819 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.746065 1391899 logs.go:138] Found kubelet problem: Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.746417 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:14.746624 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:14.746985 1391899 logs.go:138] Found kubelet problem: Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
I0127 13:24:14.747000 1391899 logs.go:123] Gathering logs for coredns [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579] ...
I0127 13:24:14.747029 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579"
I0127 13:24:14.802969 1391899 logs.go:123] Gathering logs for kube-proxy [2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6] ...
I0127 13:24:14.802997 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6"
I0127 13:24:14.885354 1391899 logs.go:123] Gathering logs for kube-proxy [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676] ...
I0127 13:24:14.885377 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676"
I0127 13:24:14.943705 1391899 logs.go:123] Gathering logs for kube-controller-manager [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6] ...
I0127 13:24:14.943731 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6"
I0127 13:24:15.004077 1391899 logs.go:123] Gathering logs for storage-provisioner [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9] ...
I0127 13:24:15.004163 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9"
I0127 13:24:15.066986 1391899 logs.go:123] Gathering logs for kubernetes-dashboard [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1] ...
I0127 13:24:15.067101 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1"
I0127 13:24:15.157150 1391899 logs.go:123] Gathering logs for etcd [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f] ...
I0127 13:24:15.157183 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f"
I0127 13:24:15.234051 1391899 logs.go:123] Gathering logs for etcd [f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5] ...
I0127 13:24:15.234091 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5"
I0127 13:24:15.331724 1391899 logs.go:123] Gathering logs for kube-scheduler [4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc] ...
I0127 13:24:15.331918 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc"
I0127 13:24:15.411973 1391899 logs.go:123] Gathering logs for containerd ...
I0127 13:24:15.412006 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0127 13:24:15.508734 1391899 logs.go:123] Gathering logs for container status ...
I0127 13:24:15.508770 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0127 13:24:15.593697 1391899 logs.go:123] Gathering logs for kube-scheduler [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53] ...
I0127 13:24:15.593769 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53"
I0127 13:24:15.652854 1391899 logs.go:123] Gathering logs for kube-controller-manager [fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13] ...
I0127 13:24:15.652938 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13"
I0127 13:24:15.783362 1391899 logs.go:123] Gathering logs for kindnet [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7] ...
I0127 13:24:15.783455 1391899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7"
I0127 13:24:15.840977 1391899 out.go:358] Setting ErrFile to fd 2...
I0127 13:24:15.841062 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0127 13:24:15.841144 1391899 out.go:270] X Problems detected in kubelet:
W0127 13:24:15.841186 1391899 out.go:270] Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:15.841217 1391899 out.go:270] Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:15.841278 1391899 out.go:270] Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
W0127 13:24:15.841310 1391899 out.go:270] Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0127 13:24:15.841339 1391899 out.go:270] Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
I0127 13:24:15.841385 1391899 out.go:358] Setting ErrFile to fd 2...
I0127 13:24:15.841414 1391899 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:24:14.266605 1402708 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0127 13:24:14.266852 1402708 start.go:159] libmachine.API.Create for "embed-certs-434512" (driver="docker")
I0127 13:24:14.266877 1402708 client.go:168] LocalClient.Create starting
I0127 13:24:14.266933 1402708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/ca.pem
I0127 13:24:14.266963 1402708 main.go:141] libmachine: Decoding PEM data...
I0127 13:24:14.266977 1402708 main.go:141] libmachine: Parsing certificate...
I0127 13:24:14.267037 1402708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-1181389/.minikube/certs/cert.pem
I0127 13:24:14.267059 1402708 main.go:141] libmachine: Decoding PEM data...
I0127 13:24:14.267068 1402708 main.go:141] libmachine: Parsing certificate...
I0127 13:24:14.267420 1402708 cli_runner.go:164] Run: docker network inspect embed-certs-434512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 13:24:14.287550 1402708 cli_runner.go:211] docker network inspect embed-certs-434512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 13:24:14.287636 1402708 network_create.go:284] running [docker network inspect embed-certs-434512] to gather additional debugging logs...
I0127 13:24:14.287658 1402708 cli_runner.go:164] Run: docker network inspect embed-certs-434512
W0127 13:24:14.319135 1402708 cli_runner.go:211] docker network inspect embed-certs-434512 returned with exit code 1
I0127 13:24:14.319169 1402708 network_create.go:287] error running [docker network inspect embed-certs-434512]: docker network inspect embed-certs-434512: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-434512 not found
I0127 13:24:14.319182 1402708 network_create.go:289] output of [docker network inspect embed-certs-434512]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-434512 not found
** /stderr **
I0127 13:24:14.319324 1402708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 13:24:14.344907 1402708 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f9fe3033877 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e9:d1:42:e8} reservation:<nil>}
I0127 13:24:14.345331 1402708 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-44e0458e836e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9e:9c:84:ef} reservation:<nil>}
I0127 13:24:14.345675 1402708 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4f5264b447e0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:0f:c7:a8:12} reservation:<nil>}
I0127 13:24:14.346085 1402708 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7a4039aaf07e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:f5:2f:95:ee} reservation:<nil>}
I0127 13:24:14.346554 1402708 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019aec00}
I0127 13:24:14.346575 1402708 network_create.go:124] attempt to create docker network embed-certs-434512 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0127 13:24:14.346635 1402708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-434512 embed-certs-434512
I0127 13:24:14.450744 1402708 network_create.go:108] docker network embed-certs-434512 192.168.85.0/24 created
I0127 13:24:14.450773 1402708 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-434512" container
I0127 13:24:14.450851 1402708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0127 13:24:14.471422 1402708 cli_runner.go:164] Run: docker volume create embed-certs-434512 --label name.minikube.sigs.k8s.io=embed-certs-434512 --label created_by.minikube.sigs.k8s.io=true
I0127 13:24:14.516120 1402708 oci.go:103] Successfully created a docker volume embed-certs-434512
I0127 13:24:14.516201 1402708 cli_runner.go:164] Run: docker run --rm --name embed-certs-434512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-434512 --entrypoint /usr/bin/test -v embed-certs-434512:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
I0127 13:24:15.347955 1402708 oci.go:107] Successfully prepared a docker volume embed-certs-434512
I0127 13:24:15.348004 1402708 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 13:24:15.348025 1402708 kic.go:194] Starting extracting preloaded images to volume ...
I0127 13:24:15.348097 1402708 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-434512:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
I0127 13:24:20.103137 1402708 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-1181389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-434512:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.754994458s)
I0127 13:24:20.103172 1402708 kic.go:203] duration metric: took 4.755142909s to extract preloaded images to volume ...
W0127 13:24:20.103329 1402708 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0127 13:24:20.103451 1402708 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0127 13:24:20.160922 1402708 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-434512 --name embed-certs-434512 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-434512 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-434512 --network embed-certs-434512 --ip 192.168.85.2 --volume embed-certs-434512:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
I0127 13:24:20.527593 1402708 cli_runner.go:164] Run: docker container inspect embed-certs-434512 --format={{.State.Running}}
I0127 13:24:20.549013 1402708 cli_runner.go:164] Run: docker container inspect embed-certs-434512 --format={{.State.Status}}
I0127 13:24:20.570152 1402708 cli_runner.go:164] Run: docker exec embed-certs-434512 stat /var/lib/dpkg/alternatives/iptables
I0127 13:24:20.628650 1402708 oci.go:144] the created container "embed-certs-434512" has a running status.
I0127 13:24:20.628678 1402708 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20317-1181389/.minikube/machines/embed-certs-434512/id_rsa...
I0127 13:24:21.028929 1402708 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20317-1181389/.minikube/machines/embed-certs-434512/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0127 13:24:21.053271 1402708 cli_runner.go:164] Run: docker container inspect embed-certs-434512 --format={{.State.Status}}
I0127 13:24:21.079154 1402708 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0127 13:24:21.079184 1402708 kic_runner.go:114] Args: [docker exec --privileged embed-certs-434512 chown docker:docker /home/docker/.ssh/authorized_keys]
I0127 13:24:21.157729 1402708 cli_runner.go:164] Run: docker container inspect embed-certs-434512 --format={{.State.Status}}
I0127 13:24:21.180889 1402708 machine.go:93] provisionDockerMachine start ...
I0127 13:24:21.180995 1402708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-434512
I0127 13:24:21.210101 1402708 main.go:141] libmachine: Using SSH client type: native
I0127 13:24:21.210377 1402708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 34237 <nil> <nil>}
I0127 13:24:21.210387 1402708 main.go:141] libmachine: About to run SSH command:
hostname
I0127 13:24:21.211088 1402708 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0127 13:24:25.842651 1391899 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0127 13:24:25.852586 1391899 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0127 13:24:25.855881 1391899 out.go:201]
W0127 13:24:25.858640 1391899 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0127 13:24:25.858717 1391899 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0127 13:24:25.858737 1391899 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0127 13:24:25.858743 1391899 out.go:270] *
W0127 13:24:25.860146 1391899 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0127 13:24:25.861945 1391899 out.go:201]
==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                          ATTEMPT   POD ID          POD
4764648b74cc8   523cad1a4df73   2 minutes ago   Exited    dashboard-metrics-scraper     5         11f3e4e3069a4   dashboard-metrics-scraper-8d5bb5db8-s2b59
ffc35dde525c7   ba04bb24b9575   5 minutes ago   Running   storage-provisioner           3         d49101632db92   storage-provisioner
84b4623c8ca9c   20b332c9a70d8   5 minutes ago   Running   kubernetes-dashboard          0         4df906feafd9d   kubernetes-dashboard-cd95d586-r9xkk
6987b703de853   db91994f4ee8f   5 minutes ago   Running   coredns                       1         8a7f803e96064   coredns-74ff55c5b-2phj4
26f1a91abb824   1611cd07b61d5   5 minutes ago   Running   busybox                       1         6ed167208dd61   busybox
eb3408e253648   ba04bb24b9575   5 minutes ago   Exited    storage-provisioner           2         d49101632db92   storage-provisioner
53392608d921e   25a5233254979   5 minutes ago   Running   kube-proxy                    1         89bf7b49b48f8   kube-proxy-8gl5q
98357c86477c2   2be0bcf609c65   5 minutes ago   Running   kindnet-cni                   1         a4e1d9b1a0f29   kindnet-h8gtn
498c359719101   e7605f88f17d6   5 minutes ago   Running   kube-scheduler                1         a66ab0ccf5e56   kube-scheduler-old-k8s-version-813213
348496f0d58f2   1df8a2b116bd1   5 minutes ago   Running   kube-controller-manager       1         d8ae462cc2a23   kube-controller-manager-old-k8s-version-813213
9dc682ca643e0   2c08bbbc02d3a   5 minutes ago   Running   kube-apiserver                1         d1e02c0b1c2c6   kube-apiserver-old-k8s-version-813213
207271fc8e8b7   05b738aa1bc63   5 minutes ago   Running   etcd                          1         11ee657758719   etcd-old-k8s-version-813213
b39242a2da416   1611cd07b61d5   6 minutes ago   Exited    busybox                       0         efc1fe61b1189   busybox
9977eebb81cee   db91994f4ee8f   7 minutes ago   Exited    coredns                       0         4cfe860a5526d   coredns-74ff55c5b-2phj4
8f9ba8ca38617   2be0bcf609c65   8 minutes ago   Exited    kindnet-cni                   0         ebe6dd711af48   kindnet-h8gtn
2cbee0b466a6c   25a5233254979   8 minutes ago   Exited    kube-proxy                    0         8c41273626fc9   kube-proxy-8gl5q
4b2832f8237f0   e7605f88f17d6   8 minutes ago   Exited    kube-scheduler                0         98b5263e0b32a   kube-scheduler-old-k8s-version-813213
fca0636811bd6   1df8a2b116bd1   8 minutes ago   Exited    kube-controller-manager       0         6e91b2824b110   kube-controller-manager-old-k8s-version-813213
dbf3fe12cc514   2c08bbbc02d3a   8 minutes ago   Exited    kube-apiserver                0         9c1ff998f5514   kube-apiserver-old-k8s-version-813213
f5b735d4310b3   05b738aa1bc63   8 minutes ago   Exited    etcd                          0         3dd826d940bf8   etcd-old-k8s-version-813213
==> containerd <==
Jan 27 13:20:44 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:44.690258161Z" level=info msg="StartContainer for \"5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d\" returns successfully"
Jan 27 13:20:44 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:44.703375837Z" level=info msg="received exit event container_id:\"5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d\" id:\"5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d\" pid:3140 exit_status:255 exited_at:{seconds:1737984044 nanos:703136754}"
Jan 27 13:20:44 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:44.725554065Z" level=info msg="shim disconnected" id=5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d namespace=k8s.io
Jan 27 13:20:44 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:44.725862308Z" level=warning msg="cleaning up after shim disconnected" id=5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d namespace=k8s.io
Jan 27 13:20:44 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:44.725957025Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 13:20:45 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:45.167445954Z" level=info msg="RemoveContainer for \"1a73064466ad43ec312674bb19a55af972382c017099dff07f77b42f1ad2eb42\""
Jan 27 13:20:45 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:20:45.174753839Z" level=info msg="RemoveContainer for \"1a73064466ad43ec312674bb19a55af972382c017099dff07f77b42f1ad2eb42\" returns successfully"
Jan 27 13:21:30 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:21:30.570010381Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 13:21:30 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:21:30.575813494Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Jan 27 13:21:30 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:21:30.577791821Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Jan 27 13:21:30 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:21:30.577827824Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.571272416Z" level=info msg="CreateContainer within sandbox \"11f3e4e3069a466ecdd7f4dbfcddf60d9fe8ad56cd24ed147cc6bbdaec30b31c\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.589089480Z" level=info msg="CreateContainer within sandbox \"11f3e4e3069a466ecdd7f4dbfcddf60d9fe8ad56cd24ed147cc6bbdaec30b31c\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0\""
Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.593093453Z" level=info msg="StartContainer for \"4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0\""
Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.669710665Z" level=info msg="StartContainer for \"4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0\" returns successfully"
Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.672955634Z" level=info msg="received exit event container_id:\"4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0\" id:\"4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0\" pid:3389 exit_status:255 exited_at:{seconds:1737984134 nanos:672607065}"
Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.706930066Z" level=info msg="shim disconnected" id=4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0 namespace=k8s.io
Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.706989691Z" level=warning msg="cleaning up after shim disconnected" id=4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0 namespace=k8s.io
Jan 27 13:22:14 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:14.707000652Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 13:22:15 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:15.398697131Z" level=info msg="RemoveContainer for \"5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d\""
Jan 27 13:22:15 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:22:15.416658405Z" level=info msg="RemoveContainer for \"5a003e8153161f97e85ded01de4be26ff28056e62861500828c0e7b64d50233d\" returns successfully"
Jan 27 13:24:18 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:24:18.569544828Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 13:24:18 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:24:18.594620452Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Jan 27 13:24:18 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:24:18.596979274Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Jan 27 13:24:18 old-k8s-version-813213 containerd[569]: time="2025-01-27T13:24:18.597011076Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [6987b703de853a384b6187fa686ac707375e62efa22618ca57d58b0628846579] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:39803 - 35571 "HINFO IN 3950191951765923051.2785443437838335599. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012433008s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0127 13:19:10.625702 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 13:18:40.625079002 +0000 UTC m=+0.029752787) (total time: 30.000523026s):
Trace[2019727887]: [30.000523026s] [30.000523026s] END
E0127 13:19:10.625737 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0127 13:19:10.626646 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 13:18:40.626338177 +0000 UTC m=+0.031011962) (total time: 30.000281006s):
Trace[939984059]: [30.000281006s] [30.000281006s] END
E0127 13:19:10.626666 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0127 13:19:10.626985 1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 13:18:40.626318698 +0000 UTC m=+0.030992467) (total time: 30.000650613s):
Trace[1474941318]: [30.000650613s] [30.000650613s] END
E0127 13:19:10.626997 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
==> coredns [9977eebb81ceefd915fb8ae9adc54fd8743f8453af41100c841a3b59d1f3e72c] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:33496 - 27897 "HINFO IN 7318361875637959600.346720018903427912. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027240103s
==> describe nodes <==
Name: old-k8s-version-813213
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-813213
kubernetes.io/os=linux
minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
minikube.k8s.io/name=old-k8s-version-813213
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_27T13_15_47_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 27 Jan 2025 13:15:43 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-813213
AcquireTime: <unset>
RenewTime: Mon, 27 Jan 2025 13:24:20 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
MemoryPressure   False   Mon, 27 Jan 2025 13:19:38 +0000   Mon, 27 Jan 2025 13:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Mon, 27 Jan 2025 13:19:38 +0000   Mon, 27 Jan 2025 13:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Mon, 27 Jan 2025 13:19:38 +0000   Mon, 27 Jan 2025 13:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True    Mon, 27 Jan 2025 13:19:38 +0000   Mon, 27 Jan 2025 13:16:04 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-813213
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: 94fc26c7473b4334bb3a2d0d8ffd8ceb
System UUID: 9e7bbca2-cbb2-4a8e-b921-d413bc5671fa
Boot ID: 9a2b5a8b-82ce-43cf-92bd-6297263d30a0
Kernel Version: 5.15.0-1075-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.24
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace              Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------              ----                                              ------------  ----------  ---------------  -------------  ---
default                busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
kube-system            coredns-74ff55c5b-2phj4                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m23s
kube-system            etcd-old-k8s-version-813213                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m32s
kube-system            kindnet-h8gtn                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m23s
kube-system            kube-apiserver-old-k8s-version-813213             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
kube-system            kube-controller-manager-old-k8s-version-813213   200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m32s
kube-system            kube-proxy-8gl5q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
kube-system            kube-scheduler-old-k8s-version-813213             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m32s
kube-system            metrics-server-9975d5f86-gkxmm                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
kube-system            storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
kubernetes-dashboard   dashboard-metrics-scraper-8d5bb5db8-s2b59         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
kubernetes-dashboard   kubernetes-dashboard-cd95d586-r9xkk               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m51s (x5 over 8m51s) kubelet Node old-k8s-version-813213 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m51s (x5 over 8m51s) kubelet Node old-k8s-version-813213 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m51s (x5 over 8m51s) kubelet Node old-k8s-version-813213 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m51s kubelet Updated Node Allocatable limit across pods
Normal Starting 8m32s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m32s kubelet Node old-k8s-version-813213 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m32s kubelet Node old-k8s-version-813213 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m32s kubelet Node old-k8s-version-813213 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m32s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m23s kubelet Node old-k8s-version-813213 status is now: NodeReady
Normal Starting 8m21s kube-proxy Starting kube-proxy.
Normal Starting 5m58s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m58s (x8 over 5m58s) kubelet Node old-k8s-version-813213 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m58s (x8 over 5m58s) kubelet Node old-k8s-version-813213 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m58s (x7 over 5m58s) kubelet Node old-k8s-version-813213 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m58s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m47s kube-proxy Starting kube-proxy.
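Note: the node description above is plain "kubectl describe node" output gathered post-mortem; assuming the test's kubeconfig is in use, it can be reproduced with:
  kubectl --context old-k8s-version-813213 describe node old-k8s-version-813213
All four node conditions are healthy and CPU requests sit at 950m of the 2-CPU allocatable (47%), so the failure investigated below is not a node-level resource problem.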
==> dmesg <==
==> etcd [207271fc8e8b70d52e363f5f48188c01f5185e49f7efced711705901058b9f7f] <==
2025-01-27 13:20:24.389003 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:20:34.389073 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:20:44.389148 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:20:54.388943 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:21:04.388977 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:21:14.389044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:21:24.389017 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:21:34.389054 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:21:44.388976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:21:54.388883 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:22:04.388929 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:22:14.389087 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:22:24.388996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:22:34.388967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:22:44.389238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:22:54.389019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:23:04.388888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:23:14.388984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:23:24.388935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:23:34.388854 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:23:44.389355 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:23:54.388936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:24:04.388918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:24:14.389261 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:24:24.389888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [f5b735d4310b352e1f2a8c5e9bc269f963d80c5a3e1cc6a8016c1873e8e598a5] <==
raft2025/01/27 13:15:37 INFO: ea7e25599daad906 became leader at term 2
raft2025/01/27 13:15:37 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-01-27 13:15:37.341247 I | etcdserver: setting up the initial cluster version to 3.4
2025-01-27 13:15:37.341501 I | etcdserver: published {Name:old-k8s-version-813213 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-01-27 13:15:37.341587 I | embed: ready to serve client requests
2025-01-27 13:15:37.343280 I | embed: ready to serve client requests
2025-01-27 13:15:37.344653 I | embed: serving client requests on 192.168.76.2:2379
2025-01-27 13:15:37.345000 N | etcdserver/membership: set the initial cluster version to 3.4
2025-01-27 13:15:37.345668 I | embed: serving client requests on 127.0.0.1:2379
2025-01-27 13:15:37.351849 I | etcdserver/api: enabled capabilities for version 3.4
2025-01-27 13:15:46.035402 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:15:58.876011 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:16:05.679445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:16:15.661312 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:16:25.659895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:16:35.659938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:16:45.659961 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:16:55.659919 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:17:05.660015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:17:15.659924 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:17:25.659923 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:17:35.661498 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:17:45.659964 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:17:55.659864 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-27 13:18:00.744644 W | etcdserver: read-only range request "key:\"/registry/replicasets/kube-system/metrics-server-9975d5f86\" " with result "range_response_count:1 size:3177" took too long (112.606733ms) to execute
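Note: the two etcd blocks above are the post-restart and pre-restart containers, respectively. The closing warning is etcd's slow-request log, emitted when a request exceeds the warning threshold (roughly 100ms by default in etcd 3.4); a single 112ms read-only range request is benign. etcd health can also be probed through the apiserver with:
  kubectl --context old-k8s-version-813213 get --raw /healthz/etcd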
==> kernel <==
13:24:28 up 6:06, 0 users, load average: 1.84, 1.90, 2.41
Linux old-k8s-version-813213 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [8f9ba8ca38617dd4c57c48f007e00144c96edc8377410ac33996e89c6c626ae7] <==
I0127 13:16:09.821116 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0127 13:16:09.821159 1 metrics.go:61] Registering metrics
I0127 13:16:09.821218 1 controller.go:401] Syncing nftables rules
I0127 13:16:19.636925 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:16:19.637021 1 main.go:301] handling current node
I0127 13:16:29.637231 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:16:29.637274 1 main.go:301] handling current node
I0127 13:16:39.634155 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:16:39.634202 1 main.go:301] handling current node
I0127 13:16:49.642776 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:16:49.642823 1 main.go:301] handling current node
I0127 13:16:59.633942 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:16:59.634133 1 main.go:301] handling current node
I0127 13:17:09.633931 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:17:09.633971 1 main.go:301] handling current node
I0127 13:17:19.636517 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:17:19.636646 1 main.go:301] handling current node
I0127 13:17:29.634970 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:17:29.635007 1 main.go:301] handling current node
I0127 13:17:39.641107 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:17:39.641141 1 main.go:301] handling current node
I0127 13:17:49.633966 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:17:49.634044 1 main.go:301] handling current node
I0127 13:17:59.637180 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:17:59.637381 1 main.go:301] handling current node
==> kindnet [98357c86477c2e61a73e06a97b5bda6b521689ebe81b9245a2adf7c79daceef7] <==
I0127 13:22:20.740023 1 main.go:301] handling current node
I0127 13:22:30.747933 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:22:30.747967 1 main.go:301] handling current node
I0127 13:22:40.739941 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:22:40.739976 1 main.go:301] handling current node
I0127 13:22:50.740553 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:22:50.740588 1 main.go:301] handling current node
I0127 13:23:00.747234 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:23:00.747270 1 main.go:301] handling current node
I0127 13:23:10.748055 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:23:10.748091 1 main.go:301] handling current node
I0127 13:23:20.745633 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:23:20.745669 1 main.go:301] handling current node
I0127 13:23:30.747015 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:23:30.747052 1 main.go:301] handling current node
I0127 13:23:40.739646 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:23:40.739681 1 main.go:301] handling current node
I0127 13:23:50.749078 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:23:50.749113 1 main.go:301] handling current node
I0127 13:24:00.740275 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:24:00.740313 1 main.go:301] handling current node
I0127 13:24:10.745857 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:24:10.745894 1 main.go:301] handling current node
I0127 13:24:20.745998 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0127 13:24:20.746108 1 main.go:301] handling current node
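Note: both kindnet containers (pre- and post-restart) show the same healthy pattern, a sync loop handling the single node 192.168.76.2 every 10 seconds. If CNI trouble were suspected, the live logs could be tailed with something like the following (the app=kindnet label selector is an assumption about the daemonset's labels):
  kubectl --context old-k8s-version-813213 -n kube-system logs -l app=kindnet -f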
==> kube-apiserver [9dc682ca643e0a8340eec4be4ed13358824df730d5eace4f73db56ba726b7da7] <==
I0127 13:21:02.802170 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0127 13:21:02.802180 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0127 13:21:36.128873 1 client.go:360] parsed scheme: "passthrough"
I0127 13:21:36.128916 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0127 13:21:36.128926 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0127 13:21:41.200797 1 handler_proxy.go:102] no RequestInfo found in the context
E0127 13:21:41.201003 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0127 13:21:41.201021 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 13:22:15.903182 1 client.go:360] parsed scheme: "passthrough"
I0127 13:22:15.903245 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0127 13:22:15.903255 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0127 13:22:46.363168 1 client.go:360] parsed scheme: "passthrough"
I0127 13:22:46.363213 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0127 13:22:46.363222 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0127 13:23:16.586448 1 client.go:360] parsed scheme: "passthrough"
I0127 13:23:16.586494 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0127 13:23:16.586504 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0127 13:23:38.719689 1 handler_proxy.go:102] no RequestInfo found in the context
E0127 13:23:38.719892 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0127 13:23:38.719909 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 13:24:01.060064 1 client.go:360] parsed scheme: "passthrough"
I0127 13:24:01.060111 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0127 13:24:01.060270 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
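Note: the repeated 503s above mean the aggregated v1beta1.metrics.k8s.io API, which is served by metrics-server, is unreachable; that matches the metrics-server pod never starting (see the kubelet section below). One way to confirm:
  kubectl --context old-k8s-version-813213 get apiservice v1beta1.metrics.k8s.io
which should report Available=False while the backing pod is down.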
==> kube-apiserver [dbf3fe12cc514cdb259c989c31b00ebdd7cf91640d7cf28f457a8a8ee0d4a0ba] <==
I0127 13:15:44.592367 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0127 13:15:44.592399 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0127 13:15:44.614684 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0127 13:15:44.619373 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0127 13:15:44.619400 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0127 13:15:45.176152 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0127 13:15:45.258905 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0127 13:15:45.337694 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0127 13:15:45.339180 1 controller.go:606] quota admission added evaluator for: endpoints
I0127 13:15:45.343954 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0127 13:15:46.372083 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0127 13:15:47.241016 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0127 13:15:47.304958 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0127 13:15:55.633487 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0127 13:16:04.311708 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0127 13:16:04.363424 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0127 13:16:09.811897 1 client.go:360] parsed scheme: "passthrough"
I0127 13:16:09.811935 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0127 13:16:09.811943 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0127 13:16:44.923182 1 client.go:360] parsed scheme: "passthrough"
I0127 13:16:44.923227 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0127 13:16:44.923236 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0127 13:17:20.419921 1 client.go:360] parsed scheme: "passthrough"
I0127 13:17:20.419968 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0127 13:17:20.419978 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [348496f0d58f24df8178fd9da80e0b658cd42c30cb16a5c7f5b4fed5e96bc2c6] <==
W0127 13:20:02.491698 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0127 13:20:26.392427 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0127 13:20:34.142248 1 request.go:655] Throttling request took 1.048311763s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0127 13:20:34.993998 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0127 13:20:56.894862 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0127 13:21:06.644543 1 request.go:655] Throttling request took 1.048449654s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v1?timeout=32s
W0127 13:21:07.496400 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0127 13:21:27.396901 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0127 13:21:39.146990 1 request.go:655] Throttling request took 1.048432953s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
W0127 13:21:39.998658 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0127 13:21:57.898738 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0127 13:22:11.648166 1 request.go:655] Throttling request took 1.048433278s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0127 13:22:12.499629 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0127 13:22:28.400714 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0127 13:22:44.150018 1 request.go:655] Throttling request took 1.04821663s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0127 13:22:45.011233 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0127 13:22:58.902511 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0127 13:23:16.663381 1 request.go:655] Throttling request took 1.048404408s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
W0127 13:23:17.514848 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0127 13:23:29.404430 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0127 13:23:49.165292 1 request.go:655] Throttling request took 1.048519089s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
W0127 13:23:50.017889 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0127 13:23:59.906325 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0127 13:24:21.668413 1 request.go:655] Throttling request took 1.04800496s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0127 13:24:22.519901 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
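Note: these garbage-collector and resource-quota discovery failures share the root cause of the apiserver 503s above: the metrics.k8s.io/v1beta1 group cannot be served. Probing the group directly should reproduce the same error:
  kubectl --context old-k8s-version-813213 get --raw /apis/metrics.k8s.io/v1beta1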
==> kube-controller-manager [fca0636811bd616b25b61e58c05bade814ad56fe3e02bfa2b5cfa584cbec2b13] <==
I0127 13:16:04.338902 1 shared_informer.go:247] Caches are synced for taint
I0127 13:16:04.339077 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0127 13:16:04.339208 1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-813213. Assuming now as a timestamp.
I0127 13:16:04.339260 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal.
I0127 13:16:04.339485 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0127 13:16:04.339777 1 event.go:291] "Event occurred" object="old-k8s-version-813213" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-813213 event: Registered Node old-k8s-version-813213 in Controller"
I0127 13:16:04.342860 1 shared_informer.go:247] Caches are synced for resource quota
I0127 13:16:04.403120 1 shared_informer.go:247] Caches are synced for disruption
I0127 13:16:04.403233 1 disruption.go:339] Sending events to api server.
I0127 13:16:04.404360 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0127 13:16:04.407769 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8gl5q"
I0127 13:16:04.418603 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-h8gtn"
I0127 13:16:04.417890 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
E0127 13:16:04.454778 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0127 13:16:04.455376 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-dn9t8"
I0127 13:16:04.609682 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0127 13:16:04.610591 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-2phj4"
E0127 13:16:04.611105 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"77adf47d-1e96-4003-80ff-c72f44ebaf58", ResourceVersion:"275", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63873580547, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000f2ab80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000f2ad00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000f2ad40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40015da880), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f2ad60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f2ad80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000f2ade0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000f500c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40010b86b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000876b60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000eb30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40010b8738)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0127 13:16:04.787893 1 shared_informer.go:247] Caches are synced for garbage collector
I0127 13:16:04.787922 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0127 13:16:04.815799 1 shared_informer.go:247] Caches are synced for garbage collector
I0127 13:16:05.690504 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0127 13:16:05.701644 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-dn9t8"
I0127 13:17:59.336021 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I0127 13:18:00.535364 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-gkxmm"
==> kube-proxy [2cbee0b466a6c9c956ae17e795d58e2af64abe7cc2a2822686196036bfcbe2e6] <==
I0127 13:16:06.800938 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0127 13:16:06.801058 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0127 13:16:06.822615 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0127 13:16:06.822906 1 server_others.go:185] Using iptables Proxier.
I0127 13:16:06.823265 1 server.go:650] Version: v1.20.0
I0127 13:16:06.824118 1 config.go:315] Starting service config controller
I0127 13:16:06.824262 1 shared_informer.go:240] Waiting for caches to sync for service config
I0127 13:16:06.824410 1 config.go:224] Starting endpoint slice config controller
I0127 13:16:06.824485 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0127 13:16:06.925236 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0127 13:16:06.925321 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [53392608d921e89b00a7a95975a674e420d59a11fbdfeb24bfb54ebe456ab676] <==
I0127 13:18:40.686811 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0127 13:18:40.686887 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0127 13:18:40.711820 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0127 13:18:40.711925 1 server_others.go:185] Using iptables Proxier.
I0127 13:18:40.712261 1 server.go:650] Version: v1.20.0
I0127 13:18:40.714008 1 config.go:315] Starting service config controller
I0127 13:18:40.718621 1 shared_informer.go:240] Waiting for caches to sync for service config
I0127 13:18:40.714803 1 config.go:224] Starting endpoint slice config controller
I0127 13:18:40.718681 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0127 13:18:40.818822 1 shared_informer.go:247] Caches are synced for service config
I0127 13:18:40.818822 1 shared_informer.go:247] Caches are synced for endpoint slice config
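Note: both kube-proxy instances start cleanly; the "Unknown proxy mode" warning only records the fallback to iptables when no mode is set in the kube-proxy configuration. If needed, the configured mode can be inspected via the kubeadm-managed configmap:
  kubectl --context old-k8s-version-813213 -n kube-system get configmap kube-proxy -o yaml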
==> kube-scheduler [498c359719101282d021d8fef3d5d3ebaf865f947cfaafe3e0ea022abbc87f53] <==
I0127 13:18:33.346910 1 serving.go:331] Generated self-signed cert in-memory
I0127 13:18:38.613577 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0127 13:18:38.613613 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0127 13:18:38.613663 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0127 13:18:38.613668 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0127 13:18:38.613689 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0127 13:18:38.613693 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0127 13:18:38.616306 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0127 13:18:38.616408 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0127 13:18:38.724910 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0127 13:18:38.725444 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0127 13:18:38.731430 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
==> kube-scheduler [4b2832f8237f04564577430293388c25c8df7894084ea4413f6ef816262c7bbc] <==
W0127 13:15:43.739321 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0127 13:15:43.741092 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0127 13:15:43.870584 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0127 13:15:43.870759 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0127 13:15:43.874977 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0127 13:15:43.875005 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0127 13:15:43.897553 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0127 13:15:43.897876 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0127 13:15:43.897967 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0127 13:15:43.898848 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0127 13:15:43.898962 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0127 13:15:43.898979 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0127 13:15:43.899051 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0127 13:15:43.899215 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0127 13:15:43.899282 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0127 13:15:43.899327 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0127 13:15:43.899365 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0127 13:15:43.921558 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0127 13:15:44.712198 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0127 13:15:44.824503 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0127 13:15:44.852101 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0127 13:15:44.857966 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0127 13:15:44.915577 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0127 13:15:45.156918 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0127 13:15:47.275146 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
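Note: the burst of "forbidden" list/watch errors at 13:15:43-45 is the usual bootstrap race: the scheduler starts before its RBAC bindings are published, and the "Caches are synced" line at 13:15:47 shows it recovered on its own. The binding can be verified afterwards with:
  kubectl --context old-k8s-version-813213 get clusterrolebinding system:kube-scheduler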
==> kubelet <==
Jan 27 13:22:56 old-k8s-version-813213 kubelet[662]: E0127 13:22:56.568752 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
Jan 27 13:23:03 old-k8s-version-813213 kubelet[662]: E0127 13:23:03.569135 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 13:23:11 old-k8s-version-813213 kubelet[662]: I0127 13:23:11.569420 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
Jan 27 13:23:11 old-k8s-version-813213 kubelet[662]: E0127 13:23:11.569770 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
Jan 27 13:23:16 old-k8s-version-813213 kubelet[662]: E0127 13:23:16.569194 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 13:23:26 old-k8s-version-813213 kubelet[662]: I0127 13:23:26.568393 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
Jan 27 13:23:26 old-k8s-version-813213 kubelet[662]: E0127 13:23:26.568839 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
Jan 27 13:23:28 old-k8s-version-813213 kubelet[662]: E0127 13:23:28.569744 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: I0127 13:23:37.568473 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
Jan 27 13:23:37 old-k8s-version-813213 kubelet[662]: E0127 13:23:37.568849 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
Jan 27 13:23:41 old-k8s-version-813213 kubelet[662]: E0127 13:23:41.573539 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: I0127 13:23:49.568827 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
Jan 27 13:23:49 old-k8s-version-813213 kubelet[662]: E0127 13:23:49.569246 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
Jan 27 13:23:54 old-k8s-version-813213 kubelet[662]: E0127 13:23:54.569362 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: I0127 13:24:03.568465 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
Jan 27 13:24:03 old-k8s-version-813213 kubelet[662]: E0127 13:24:03.568788 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
Jan 27 13:24:05 old-k8s-version-813213 kubelet[662]: E0127 13:24:05.569484 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: I0127 13:24:14.568558 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
Jan 27 13:24:14 old-k8s-version-813213 kubelet[662]: E0127 13:24:14.569118 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
Jan 27 13:24:18 old-k8s-version-813213 kubelet[662]: E0127 13:24:18.597378 662 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Jan 27 13:24:18 old-k8s-version-813213 kubelet[662]: E0127 13:24:18.597431 662 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Jan 27 13:24:18 old-k8s-version-813213 kubelet[662]: E0127 13:24:18.598868 662 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-8rbgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Jan 27 13:24:18 old-k8s-version-813213 kubelet[662]: E0127 13:24:18.598916 662 pod_workers.go:191] Error syncing pod 00aea83b-5c4a-48d5-b920-1fe2854717a0 ("metrics-server-9975d5f86-gkxmm_kube-system(00aea83b-5c4a-48d5-b920-1fe2854717a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Jan 27 13:24:28 old-k8s-version-813213 kubelet[662]: I0127 13:24:28.568435 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4764648b74cc899aa13e2333e353bab858d41527b0eb96fb5851c7c92bb47ff0
Jan 27 13:24:28 old-k8s-version-813213 kubelet[662]: E0127 13:24:28.568770 662 pod_workers.go:191] Error syncing pod 504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b ("dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-s2b59_kubernetes-dashboard(504ddf65-c32f-47f9-b5f0-0ebb3a3ade4b)"
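Note: this kubelet section records the actual failure behind the test's wait timeout: metrics-server-9975d5f86-gkxmm is stuck in ImagePullBackOff because the registry host fake.domain never resolves (the DNS lookup against 192.168.76.1:53 fails), so the container can never start; dashboard-metrics-scraper-8d5bb5db8-s2b59 is separately crash-looping. The pod-level view would be:
  kubectl --context old-k8s-version-813213 -n kube-system describe pod metrics-server-9975d5f86-gkxmm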
==> kubernetes-dashboard [84b4623c8ca9caacba41fa6535bc6f3dc8e0c52347d8ef68c993a81a7da957d1] <==
2025/01/27 13:19:03 Starting overwatch
2025/01/27 13:19:03 Using namespace: kubernetes-dashboard
2025/01/27 13:19:03 Using in-cluster config to connect to apiserver
2025/01/27 13:19:03 Using secret token for csrf signing
2025/01/27 13:19:03 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/01/27 13:19:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/01/27 13:19:04 Successful initial request to the apiserver, version: v1.20.0
2025/01/27 13:19:04 Generating JWE encryption key
2025/01/27 13:19:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/01/27 13:19:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/01/27 13:19:04 Initializing JWE encryption key from synchronized object
2025/01/27 13:19:04 Creating in-cluster Sidecar client
2025/01/27 13:19:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:19:04 Serving insecurely on HTTP port: 9090
2025/01/27 13:19:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:20:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:20:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:21:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:21:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:22:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:22:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:23:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:23:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:24:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [eb3408e2536487de3f2f4d60740cac93d750307a3768323cc24df540e774eb8c] <==
I0127 13:18:40.611598 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0127 13:19:10.623925 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [ffc35dde525c7cc22c06eeef64a37735aeec2d5e82f29e5ae8ca6de1093abde9] <==
I0127 13:19:23.704351 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0127 13:19:23.736954 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0127 13:19:23.737187 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0127 13:19:41.202470 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0127 13:19:41.202677 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"727bf6f7-918a-47a1-abfe-871409cd83da", APIVersion:"v1", ResourceVersion:"857", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-813213_b97c09ba-f010-48db-a362-6cfb89fbb038 became leader
I0127 13:19:41.202993 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-813213_b97c09ba-f010-48db-a362-6cfb89fbb038!
I0127 13:19:41.303107 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-813213_b97c09ba-f010-48db-a362-6cfb89fbb038!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-813213 -n old-k8s-version-813213
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-813213 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-gkxmm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-813213 describe pod metrics-server-9975d5f86-gkxmm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-813213 describe pod metrics-server-9975d5f86-gkxmm: exit status 1 (137.10156ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-gkxmm" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-813213 describe pod metrics-server-9975d5f86-gkxmm: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (376.37s)