=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-618033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-618033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m14.521511705s)
-- stdout --
* [old-k8s-version-618033] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20151
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
* Using the docker driver based on existing profile
* Starting "old-k8s-version-618033" primary control-plane node in "old-k8s-version-618033" cluster
* Pulling base image v0.0.46 ...
* Restarting existing docker container for "old-k8s-version-618033" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
* Verifying Kubernetes components...
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-618033 addons enable metrics-server
* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
-- /stdout --
** stderr **
I0120 12:26:52.472735 663170 out.go:345] Setting OutFile to fd 1 ...
I0120 12:26:52.472884 663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:26:52.472912 663170 out.go:358] Setting ErrFile to fd 2...
I0120 12:26:52.472931 663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:26:52.473227 663170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
I0120 12:26:52.473730 663170 out.go:352] Setting JSON to false
I0120 12:26:52.474835 663170 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7758,"bootTime":1737368255,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0120 12:26:52.474911 663170 start.go:139] virtualization:
I0120 12:26:52.478171 663170 out.go:177] * [old-k8s-version-618033] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0120 12:26:52.482049 663170 out.go:177] - MINIKUBE_LOCATION=20151
I0120 12:26:52.482178 663170 notify.go:220] Checking for updates...
I0120 12:26:52.488295 663170 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 12:26:52.491245 663170 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
I0120 12:26:52.494196 663170 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
I0120 12:26:52.497157 663170 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0120 12:26:52.500153 663170 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 12:26:52.503748 663170 config.go:182] Loaded profile config "old-k8s-version-618033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0120 12:26:52.507369 663170 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
I0120 12:26:52.510244 663170 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 12:26:52.539043 663170 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
I0120 12:26:52.539177 663170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 12:26:52.599859 663170 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 12:26:52.589340967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 12:26:52.600056 663170 docker.go:318] overlay module found
I0120 12:26:52.603234 663170 out.go:177] * Using the docker driver based on existing profile
I0120 12:26:52.606992 663170 start.go:297] selected driver: docker
I0120 12:26:52.607020 663170 start.go:901] validating driver "docker" against &{Name:old-k8s-version-618033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-618033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 12:26:52.607246 663170 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 12:26:52.608136 663170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 12:26:52.670376 663170 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 12:26:52.661124274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 12:26:52.670787 663170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 12:26:52.670816 663170 cni.go:84] Creating CNI manager for ""
I0120 12:26:52.670857 663170 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 12:26:52.670902 663170 start.go:340] cluster config:
{Name:old-k8s-version-618033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-618033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 12:26:52.675861 663170 out.go:177] * Starting "old-k8s-version-618033" primary control-plane node in "old-k8s-version-618033" cluster
I0120 12:26:52.678678 663170 cache.go:121] Beginning downloading kic base image for docker with containerd
I0120 12:26:52.681617 663170 out.go:177] * Pulling base image v0.0.46 ...
I0120 12:26:52.684463 663170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 12:26:52.684500 663170 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0120 12:26:52.684526 663170 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0120 12:26:52.684536 663170 cache.go:56] Caching tarball of preloaded images
I0120 12:26:52.684619 663170 preload.go:172] Found /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0120 12:26:52.684630 663170 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0120 12:26:52.684757 663170 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/config.json ...
I0120 12:26:52.705465 663170 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0120 12:26:52.705489 663170 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0120 12:26:52.705508 663170 cache.go:227] Successfully downloaded all kic artifacts
I0120 12:26:52.705540 663170 start.go:360] acquireMachinesLock for old-k8s-version-618033: {Name:mkb3e387ac9b6c1340636316ee22387c36aa6166 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:26:52.705653 663170 start.go:364] duration metric: took 89.896µs to acquireMachinesLock for "old-k8s-version-618033"
I0120 12:26:52.705743 663170 start.go:96] Skipping create...Using existing machine configuration
I0120 12:26:52.705751 663170 fix.go:54] fixHost starting:
I0120 12:26:52.706226 663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
I0120 12:26:52.725106 663170 fix.go:112] recreateIfNeeded on old-k8s-version-618033: state=Stopped err=<nil>
W0120 12:26:52.725136 663170 fix.go:138] unexpected machine state, will restart: <nil>
I0120 12:26:52.728528 663170 out.go:177] * Restarting existing docker container for "old-k8s-version-618033" ...
I0120 12:26:52.731352 663170 cli_runner.go:164] Run: docker start old-k8s-version-618033
I0120 12:26:53.030735 663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
I0120 12:26:53.054739 663170 kic.go:430] container "old-k8s-version-618033" state is running.
I0120 12:26:53.055157 663170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-618033
I0120 12:26:53.078614 663170 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/config.json ...
I0120 12:26:53.078843 663170 machine.go:93] provisionDockerMachine start ...
I0120 12:26:53.078903 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:53.101652 663170 main.go:141] libmachine: Using SSH client type: native
I0120 12:26:53.101933 663170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33464 <nil> <nil>}
I0120 12:26:53.101943 663170 main.go:141] libmachine: About to run SSH command:
hostname
I0120 12:26:53.102913 663170 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0120 12:26:56.229030 663170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-618033
I0120 12:26:56.229099 663170 ubuntu.go:169] provisioning hostname "old-k8s-version-618033"
I0120 12:26:56.229173 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:56.247570 663170 main.go:141] libmachine: Using SSH client type: native
I0120 12:26:56.247834 663170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33464 <nil> <nil>}
I0120 12:26:56.247855 663170 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-618033 && echo "old-k8s-version-618033" | sudo tee /etc/hostname
I0120 12:26:56.382124 663170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-618033
I0120 12:26:56.382204 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:56.400193 663170 main.go:141] libmachine: Using SSH client type: native
I0120 12:26:56.400525 663170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33464 <nil> <nil>}
I0120 12:26:56.400638 663170 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-618033' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-618033/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-618033' | sudo tee -a /etc/hosts;
fi
fi
I0120 12:26:56.525972 663170 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 12:26:56.525998 663170 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20151-446459/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-446459/.minikube}
I0120 12:26:56.526027 663170 ubuntu.go:177] setting up certificates
I0120 12:26:56.526038 663170 provision.go:84] configureAuth start
I0120 12:26:56.526099 663170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-618033
I0120 12:26:56.543361 663170 provision.go:143] copyHostCerts
I0120 12:26:56.543430 663170 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem, removing ...
I0120 12:26:56.543444 663170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem
I0120 12:26:56.543518 663170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem (1082 bytes)
I0120 12:26:56.543638 663170 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem, removing ...
I0120 12:26:56.543656 663170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem
I0120 12:26:56.543686 663170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem (1123 bytes)
I0120 12:26:56.543745 663170 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem, removing ...
I0120 12:26:56.543753 663170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem
I0120 12:26:56.543779 663170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem (1675 bytes)
I0120 12:26:56.543835 663170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-618033 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-618033]
I0120 12:26:56.797916 663170 provision.go:177] copyRemoteCerts
I0120 12:26:56.797993 663170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 12:26:56.798050 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:56.816238 663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
I0120 12:26:56.910907 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0120 12:26:56.935780 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0120 12:26:56.964919 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0120 12:26:56.991221 663170 provision.go:87] duration metric: took 465.16926ms to configureAuth
I0120 12:26:56.991254 663170 ubuntu.go:193] setting minikube options for container-runtime
I0120 12:26:56.991464 663170 config.go:182] Loaded profile config "old-k8s-version-618033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0120 12:26:56.991478 663170 machine.go:96] duration metric: took 3.91262644s to provisionDockerMachine
I0120 12:26:56.991486 663170 start.go:293] postStartSetup for "old-k8s-version-618033" (driver="docker")
I0120 12:26:56.991497 663170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 12:26:56.991553 663170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 12:26:56.991599 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:57.028683 663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
I0120 12:26:57.123636 663170 ssh_runner.go:195] Run: cat /etc/os-release
I0120 12:26:57.127074 663170 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0120 12:26:57.127114 663170 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0120 12:26:57.127134 663170 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0120 12:26:57.127142 663170 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0120 12:26:57.127153 663170 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-446459/.minikube/addons for local assets ...
I0120 12:26:57.127215 663170 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-446459/.minikube/files for local assets ...
I0120 12:26:57.127308 663170 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem -> 4518352.pem in /etc/ssl/certs
I0120 12:26:57.127434 663170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 12:26:57.136651 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem --> /etc/ssl/certs/4518352.pem (1708 bytes)
I0120 12:26:57.163857 663170 start.go:296] duration metric: took 172.353896ms for postStartSetup
I0120 12:26:57.163948 663170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0120 12:26:57.163993 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:57.182335 663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
I0120 12:26:57.274452 663170 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0120 12:26:57.278997 663170 fix.go:56] duration metric: took 4.573235899s for fixHost
I0120 12:26:57.279022 663170 start.go:83] releasing machines lock for "old-k8s-version-618033", held for 4.573308096s
I0120 12:26:57.279096 663170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-618033
I0120 12:26:57.296365 663170 ssh_runner.go:195] Run: cat /version.json
I0120 12:26:57.296424 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:57.296703 663170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 12:26:57.296754 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:57.314709 663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
I0120 12:26:57.332055 663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
I0120 12:26:57.409216 663170 ssh_runner.go:195] Run: systemctl --version
I0120 12:26:57.541977 663170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0120 12:26:57.546985 663170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0120 12:26:57.567729 663170 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0120 12:26:57.567825 663170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 12:26:57.576894 663170 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0120 12:26:57.576927 663170 start.go:495] detecting cgroup driver to use...
I0120 12:26:57.576960 663170 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0120 12:26:57.577022 663170 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 12:26:57.591237 663170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 12:26:57.611185 663170 docker.go:217] disabling cri-docker service (if available) ...
I0120 12:26:57.611281 663170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 12:26:57.624724 663170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 12:26:57.637354 663170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 12:26:57.739028 663170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 12:26:57.871908 663170 docker.go:233] disabling docker service ...
I0120 12:26:57.871984 663170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 12:26:57.888145 663170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 12:26:57.904839 663170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 12:26:58.033309 663170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 12:26:58.146945 663170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 12:26:58.164074 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 12:26:58.185362 663170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0120 12:26:58.198087 663170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 12:26:58.209764 663170 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 12:26:58.209890 663170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 12:26:58.222329 663170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 12:26:58.233220 663170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 12:26:58.243859 663170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 12:26:58.254316 663170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 12:26:58.263730 663170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 12:26:58.274389 663170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 12:26:58.283291 663170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 12:26:58.291912 663170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:26:58.381082 663170 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 12:26:58.543727 663170 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0120 12:26:58.543801 663170 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 12:26:58.547896 663170 start.go:563] Will wait 60s for crictl version
I0120 12:26:58.548012 663170 ssh_runner.go:195] Run: which crictl
I0120 12:26:58.552300 663170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 12:26:58.595990 663170 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.24
RuntimeApiVersion: v1
I0120 12:26:58.596072 663170 ssh_runner.go:195] Run: containerd --version
I0120 12:26:58.622667 663170 ssh_runner.go:195] Run: containerd --version
I0120 12:26:58.648645 663170 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
I0120 12:26:58.651904 663170 cli_runner.go:164] Run: docker network inspect old-k8s-version-618033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 12:26:58.672645 663170 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0120 12:26:58.676682 663170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 12:26:58.688048 663170 kubeadm.go:883] updating cluster {Name:old-k8s-version-618033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-618033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 12:26:58.688177 663170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 12:26:58.688237 663170 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:26:58.725695 663170 containerd.go:627] all images are preloaded for containerd runtime.
I0120 12:26:58.725720 663170 containerd.go:534] Images already preloaded, skipping extraction
I0120 12:26:58.725805 663170 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:26:58.771363 663170 containerd.go:627] all images are preloaded for containerd runtime.
I0120 12:26:58.771392 663170 cache_images.go:84] Images are preloaded, skipping loading
I0120 12:26:58.771402 663170 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
I0120 12:26:58.771539 663170 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-618033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-618033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0120 12:26:58.771614 663170 ssh_runner.go:195] Run: sudo crictl info
I0120 12:26:58.811582 663170 cni.go:84] Creating CNI manager for ""
I0120 12:26:58.811611 663170 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 12:26:58.811623 663170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 12:26:58.811675 663170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-618033 NodeName:old-k8s-version-618033 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0120 12:26:58.811844 663170 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-618033"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0120 12:26:58.811945 663170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0120 12:26:58.821057 663170 binaries.go:44] Found k8s binaries, skipping transfer
I0120 12:26:58.821153 663170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 12:26:58.830239 663170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0120 12:26:58.854688 663170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 12:26:58.873324 663170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0120 12:26:58.892395 663170 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0120 12:26:58.895899 663170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 12:26:58.907300 663170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:26:59.046687 663170 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 12:26:59.065284 663170 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033 for IP: 192.168.85.2
I0120 12:26:59.065311 663170 certs.go:194] generating shared ca certs ...
I0120 12:26:59.065328 663170 certs.go:226] acquiring lock for ca certs: {Name:mkcccec907119c13813a959b3b756156d7101c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:26:59.065535 663170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-446459/.minikube/ca.key
I0120 12:26:59.065620 663170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.key
I0120 12:26:59.065638 663170 certs.go:256] generating profile certs ...
I0120 12:26:59.065739 663170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.key
I0120 12:26:59.065861 663170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/apiserver.key.a7955a31
I0120 12:26:59.065930 663170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/proxy-client.key
I0120 12:26:59.066084 663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835.pem (1338 bytes)
W0120 12:26:59.066136 663170 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835_empty.pem, impossibly tiny 0 bytes
I0120 12:26:59.066150 663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem (1675 bytes)
I0120 12:26:59.066191 663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem (1082 bytes)
I0120 12:26:59.066233 663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem (1123 bytes)
I0120 12:26:59.066267 663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem (1675 bytes)
I0120 12:26:59.066313 663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem (1708 bytes)
I0120 12:26:59.067141 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 12:26:59.099113 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0120 12:26:59.133809 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 12:26:59.194639 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0120 12:26:59.229900 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0120 12:26:59.261290 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0120 12:26:59.289275 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 12:26:59.314283 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0120 12:26:59.339248 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem --> /usr/share/ca-certificates/4518352.pem (1708 bytes)
I0120 12:26:59.365723 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 12:26:59.390722 663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835.pem --> /usr/share/ca-certificates/451835.pem (1338 bytes)
I0120 12:26:59.415267 663170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 12:26:59.433136 663170 ssh_runner.go:195] Run: openssl version
I0120 12:26:59.439087 663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4518352.pem && ln -fs /usr/share/ca-certificates/4518352.pem /etc/ssl/certs/4518352.pem"
I0120 12:26:59.449175 663170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4518352.pem
I0120 12:26:59.452764 663170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:45 /usr/share/ca-certificates/4518352.pem
I0120 12:26:59.452853 663170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4518352.pem
I0120 12:26:59.459802 663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4518352.pem /etc/ssl/certs/3ec20f2e.0"
I0120 12:26:59.469014 663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 12:26:59.478543 663170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 12:26:59.482479 663170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:38 /usr/share/ca-certificates/minikubeCA.pem
I0120 12:26:59.482549 663170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 12:26:59.490026 663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 12:26:59.499564 663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/451835.pem && ln -fs /usr/share/ca-certificates/451835.pem /etc/ssl/certs/451835.pem"
I0120 12:26:59.509525 663170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/451835.pem
I0120 12:26:59.513258 663170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:45 /usr/share/ca-certificates/451835.pem
I0120 12:26:59.513330 663170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/451835.pem
I0120 12:26:59.520584 663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/451835.pem /etc/ssl/certs/51391683.0"
I0120 12:26:59.530120 663170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 12:26:59.534495 663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0120 12:26:59.542138 663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0120 12:26:59.549707 663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0120 12:26:59.558567 663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0120 12:26:59.566528 663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0120 12:26:59.576313 663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0120 12:26:59.585436 663170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-618033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-618033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 12:26:59.585527 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0120 12:26:59.585683 663170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 12:26:59.629925 663170 cri.go:89] found id: "3513d77a54b31bffdcc1bbcf5c23a22ceb456d92983f2bf891fef527b1e11c79"
I0120 12:26:59.629959 663170 cri.go:89] found id: "31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
I0120 12:26:59.629965 663170 cri.go:89] found id: "2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
I0120 12:26:59.629969 663170 cri.go:89] found id: "a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
I0120 12:26:59.629972 663170 cri.go:89] found id: "9356782751c42b29ae874fda487e04d94022a03286f14a2f8339eba1d542c7f1"
I0120 12:26:59.629976 663170 cri.go:89] found id: "8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
I0120 12:26:59.630000 663170 cri.go:89] found id: "758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
I0120 12:26:59.630012 663170 cri.go:89] found id: "4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
I0120 12:26:59.630015 663170 cri.go:89] found id: "6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
I0120 12:26:59.630026 663170 cri.go:89] found id: ""
I0120 12:26:59.630091 663170 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0120 12:26:59.642447 663170 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-20T12:26:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0120 12:26:59.642521 663170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 12:26:59.652089 663170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0120 12:26:59.652108 663170 kubeadm.go:593] restartPrimaryControlPlane start ...
I0120 12:26:59.652187 663170 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0120 12:26:59.660764 663170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0120 12:26:59.661365 663170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-618033" does not appear in /home/jenkins/minikube-integration/20151-446459/kubeconfig
I0120 12:26:59.661731 663170 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-446459/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-618033" cluster setting kubeconfig missing "old-k8s-version-618033" context setting]
I0120 12:26:59.662202 663170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/kubeconfig: {Name:mkd202431392e920a92afeece62697072b25ee29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:26:59.663634 663170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0120 12:26:59.672553 663170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I0120 12:26:59.672641 663170 kubeadm.go:597] duration metric: took 20.524986ms to restartPrimaryControlPlane
I0120 12:26:59.672658 663170 kubeadm.go:394] duration metric: took 87.229837ms to StartCluster
I0120 12:26:59.672675 663170 settings.go:142] acquiring lock: {Name:mka92edde1befc8914a01871e41167ef1a7b90c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:26:59.672749 663170 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20151-446459/kubeconfig
I0120 12:26:59.673666 663170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/kubeconfig: {Name:mkd202431392e920a92afeece62697072b25ee29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:26:59.673908 663170 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 12:26:59.674285 663170 config.go:182] Loaded profile config "old-k8s-version-618033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0120 12:26:59.674355 663170 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 12:26:59.674441 663170 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-618033"
I0120 12:26:59.674449 663170 addons.go:69] Setting dashboard=true in profile "old-k8s-version-618033"
I0120 12:26:59.674456 663170 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-618033"
W0120 12:26:59.674463 663170 addons.go:247] addon storage-provisioner should already be in state true
I0120 12:26:59.674466 663170 addons.go:238] Setting addon dashboard=true in "old-k8s-version-618033"
W0120 12:26:59.674473 663170 addons.go:247] addon dashboard should already be in state true
I0120 12:26:59.674488 663170 host.go:66] Checking if "old-k8s-version-618033" exists ...
I0120 12:26:59.674494 663170 host.go:66] Checking if "old-k8s-version-618033" exists ...
I0120 12:26:59.674927 663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
I0120 12:26:59.675089 663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
I0120 12:26:59.675574 663170 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-618033"
I0120 12:26:59.675602 663170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-618033"
I0120 12:26:59.675886 663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
I0120 12:26:59.679790 663170 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-618033"
I0120 12:26:59.679820 663170 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-618033"
W0120 12:26:59.679828 663170 addons.go:247] addon metrics-server should already be in state true
I0120 12:26:59.679862 663170 host.go:66] Checking if "old-k8s-version-618033" exists ...
I0120 12:26:59.679917 663170 out.go:177] * Verifying Kubernetes components...
I0120 12:26:59.680333 663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
I0120 12:26:59.685904 663170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:26:59.725715 663170 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0120 12:26:59.732707 663170 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0120 12:26:59.736225 663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0120 12:26:59.736254 663170 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0120 12:26:59.736338 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:59.753277 663170 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 12:26:59.756593 663170 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:26:59.756616 663170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 12:26:59.756683 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:59.761139 663170 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0120 12:26:59.769706 663170 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0120 12:26:59.769761 663170 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0120 12:26:59.769835 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:59.782146 663170 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-618033"
W0120 12:26:59.782171 663170 addons.go:247] addon default-storageclass should already be in state true
I0120 12:26:59.782197 663170 host.go:66] Checking if "old-k8s-version-618033" exists ...
I0120 12:26:59.782611 663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
I0120 12:26:59.818964 663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
I0120 12:26:59.833932 663170 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 12:26:59.855112 663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
I0120 12:26:59.856845 663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
I0120 12:26:59.865501 663170 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 12:26:59.865526 663170 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 12:26:59.865606 663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
I0120 12:26:59.900712 663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
I0120 12:26:59.902855 663170 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-618033" to be "Ready" ...
I0120 12:26:59.967635 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:27:00.010029 663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0120 12:27:00.010058 663170 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0120 12:27:00.050914 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 12:27:00.055656 663170 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0120 12:27:00.055738 663170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0120 12:27:00.095443 663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0120 12:27:00.095526 663170 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0120 12:27:00.104190 663170 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0120 12:27:00.104290 663170 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
W0120 12:27:00.169211 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.169422 663170 retry.go:31] will retry after 371.274924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.176423 663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0120 12:27:00.176506 663170 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0120 12:27:00.176621 663170 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0120 12:27:00.176647 663170 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0120 12:27:00.221713 663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0120 12:27:00.221777 663170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0120 12:27:00.223969 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 12:27:00.254834 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.254938 663170 retry.go:31] will retry after 284.770624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.266988 663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0120 12:27:00.267099 663170 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0120 12:27:00.302692 663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0120 12:27:00.302790 663170 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0120 12:27:00.328890 663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0120 12:27:00.328975 663170 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0120 12:27:00.360106 663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0120 12:27:00.360185 663170 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
W0120 12:27:00.364469 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.364512 663170 retry.go:31] will retry after 323.50775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.386644 663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0120 12:27:00.386672 663170 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0120 12:27:00.408309 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 12:27:00.486324 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.486372 663170 retry.go:31] will retry after 137.676303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.540528 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0120 12:27:00.541065 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:27:00.624878 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 12:27:00.688601 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 12:27:00.887225 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.887257 663170 retry.go:31] will retry after 234.141798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 12:27:00.887310 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.887316 663170 retry.go:31] will retry after 476.099981ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 12:27:00.887352 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.887359 663170 retry.go:31] will retry after 538.180139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 12:27:00.974577 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:00.974615 663170 retry.go:31] will retry after 415.665863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:01.122069 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 12:27:01.211110 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:01.211148 663170 retry.go:31] will retry after 829.770354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:01.364404 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:27:01.390708 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 12:27:01.425847 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 12:27:01.556576 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:01.556722 663170 retry.go:31] will retry after 744.642747ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 12:27:01.556805 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:01.556823 663170 retry.go:31] will retry after 690.057704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 12:27:01.583210 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:01.583247 663170 retry.go:31] will retry after 405.23663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:01.903941 663170 node_ready.go:53] error getting node "old-k8s-version-618033": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-618033": dial tcp 192.168.85.2:8443: connect: connection refused
I0120 12:27:01.989334 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 12:27:02.041888 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 12:27:02.081278 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:02.081320 663170 retry.go:31] will retry after 941.982727ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 12:27:02.135113 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:02.135151 663170 retry.go:31] will retry after 450.629194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:02.247518 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 12:27:02.302190 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 12:27:02.336794 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:02.336825 663170 retry.go:31] will retry after 1.174787233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 12:27:02.384162 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:02.384203 663170 retry.go:31] will retry after 1.183556458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:02.586303 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 12:27:02.662260 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:02.662322 663170 retry.go:31] will retry after 1.413480727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:03.023957 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 12:27:03.117337 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:03.117374 663170 retry.go:31] will retry after 1.679499016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:03.512572 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 12:27:03.568166 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 12:27:03.595069 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:03.595103 663170 retry.go:31] will retry after 839.161021ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0120 12:27:03.648816 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:03.648850 663170 retry.go:31] will retry after 675.766125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:04.076687 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 12:27:04.147761 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:04.147798 663170 retry.go:31] will retry after 2.80568394s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:04.324894 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 12:27:04.402816 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:04.402849 663170 retry.go:31] will retry after 951.259554ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:04.403274 663170 node_ready.go:53] error getting node "old-k8s-version-618033": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-618033": dial tcp 192.168.85.2:8443: connect: connection refused
I0120 12:27:04.434561 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 12:27:04.508033 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:04.508071 663170 retry.go:31] will retry after 2.20700865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:04.797515 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 12:27:04.880328 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:04.880365 663170 retry.go:31] will retry after 1.302846526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:05.354329 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 12:27:05.433734 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:05.433767 663170 retry.go:31] will retry after 2.730402576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:06.184042 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0120 12:27:06.269689 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:06.269725 663170 retry.go:31] will retry after 4.230833571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:06.404455 663170 node_ready.go:53] error getting node "old-k8s-version-618033": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-618033": dial tcp 192.168.85.2:8443: connect: connection refused
I0120 12:27:06.716656 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0120 12:27:06.791000 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:06.791037 663170 retry.go:31] will retry after 3.873216238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:06.953926 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0120 12:27:07.026909 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:07.026946 663170 retry.go:31] will retry after 2.249294821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:08.164697 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0120 12:27:08.334716 663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:08.334750 663170 retry.go:31] will retry after 5.697542223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0120 12:27:09.277324 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0120 12:27:10.500793 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 12:27:10.665214 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 12:27:14.033411 663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:27:17.358066 663170 node_ready.go:49] node "old-k8s-version-618033" has status "Ready":"True"
I0120 12:27:17.358089 663170 node_ready.go:38] duration metric: took 17.455197236s for node "old-k8s-version-618033" to be "Ready" ...
I0120 12:27:17.358099 663170 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:27:17.441607 663170 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-vjbl2" in "kube-system" namespace to be "Ready" ...
I0120 12:27:17.605819 663170 pod_ready.go:93] pod "coredns-74ff55c5b-vjbl2" in "kube-system" namespace has status "Ready":"True"
I0120 12:27:17.605897 663170 pod_ready.go:82] duration metric: took 164.199283ms for pod "coredns-74ff55c5b-vjbl2" in "kube-system" namespace to be "Ready" ...
I0120 12:27:17.605924 663170 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
I0120 12:27:17.640382 663170 pod_ready.go:93] pod "etcd-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"True"
I0120 12:27:17.640462 663170 pod_ready.go:82] duration metric: took 34.514694ms for pod "etcd-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
I0120 12:27:17.640495 663170 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
I0120 12:27:18.311838 663170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.034473628s)
I0120 12:27:18.639698 663170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.138858793s)
I0120 12:27:18.640080 663170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.974836712s)
I0120 12:27:18.640223 663170 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-618033"
I0120 12:27:18.640180 663170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.606733796s)
I0120 12:27:18.641489 663170 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-618033 addons enable metrics-server
I0120 12:27:18.642854 663170 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0120 12:27:18.644180 663170 addons.go:514] duration metric: took 18.969824526s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
I0120 12:27:19.648084 663170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:22.146943 663170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:24.151010 663170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:26.646613 663170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:27.146609 663170 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"True"
I0120 12:27:27.146637 663170 pod_ready.go:82] duration metric: took 9.506120108s for pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
I0120 12:27:27.146653 663170 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
I0120 12:27:29.155666 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:31.156289 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:33.657736 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:35.665731 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:38.158101 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:40.654646 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:43.155925 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:45.158031 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:47.657813 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:50.165941 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:52.654334 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:55.154290 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:57.653678 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:59.658499 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:02.153839 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:04.153924 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:06.656230 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:09.152931 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:11.155033 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:13.652567 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:15.653794 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:17.656923 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:20.153794 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:22.154276 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:24.652666 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:26.653240 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:28.653792 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:31.154010 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:33.652452 663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:35.652633 663170 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"True"
I0120 12:28:35.652657 663170 pod_ready.go:82] duration metric: took 1m8.505964238s for pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
I0120 12:28:35.652670 663170 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q2cdx" in "kube-system" namespace to be "Ready" ...
I0120 12:28:35.658056 663170 pod_ready.go:93] pod "kube-proxy-q2cdx" in "kube-system" namespace has status "Ready":"True"
I0120 12:28:35.658082 663170 pod_ready.go:82] duration metric: took 5.404269ms for pod "kube-proxy-q2cdx" in "kube-system" namespace to be "Ready" ...
I0120 12:28:35.658095 663170 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
I0120 12:28:37.665722 663170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:40.165049 663170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:42.172985 663170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:42.664829 663170 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"True"
I0120 12:28:42.664859 663170 pod_ready.go:82] duration metric: took 7.006756186s for pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
I0120 12:28:42.664872 663170 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace to be "Ready" ...
I0120 12:28:44.671983 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:47.170635 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:49.172183 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:51.675108 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:53.675558 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:56.171952 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:58.175246 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:00.192941 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:02.671331 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:04.675940 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:07.171664 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:09.175340 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:11.671720 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:14.170833 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:16.172791 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:18.671698 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:21.171053 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:23.175126 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:25.670895 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:27.671401 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:30.176056 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:32.671628 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:34.675586 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:37.171351 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:39.171662 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:41.671072 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:43.671381 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:46.170595 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:48.175411 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:50.671807 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:53.177132 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:55.670815 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:57.670978 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:59.671514 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:01.672223 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:04.172077 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:06.671344 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:08.677088 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:11.172276 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:13.671449 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:16.172081 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:18.671313 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:20.671579 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:22.672719 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:25.172188 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:27.174389 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:29.671863 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:32.171468 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:34.671534 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:36.671809 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:39.171372 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:41.172094 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:43.177396 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:45.674333 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:48.171995 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:50.670614 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:52.671705 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:54.673264 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:57.170991 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:59.171272 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:01.172964 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:03.671222 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:06.171532 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:08.171887 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:10.172406 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:12.671553 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:15.172651 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:17.671240 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:19.672000 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:22.171027 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:24.171468 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:26.672609 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:29.172730 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:31.670890 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:33.671955 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:36.171491 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:38.172489 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:40.671650 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:43.171738 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:45.182052 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:47.672271 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:50.170716 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:52.172555 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:54.670689 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:56.675262 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:31:58.675720 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:01.172517 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:03.670815 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:06.172162 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:08.672118 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:11.172318 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:13.173945 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:15.176805 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:17.673799 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:20.173022 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:22.174529 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:24.671226 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:26.671707 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:28.676084 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:31.173025 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:33.175082 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:35.672357 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:38.172501 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:40.172959 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:42.174961 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:42.665891 663170 pod_ready.go:82] duration metric: took 4m0.000999177s for pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace to be "Ready" ...
E0120 12:32:42.665923 663170 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0120 12:32:42.665934 663170 pod_ready.go:39] duration metric: took 5m25.307823459s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:32:42.665953 663170 api_server.go:52] waiting for apiserver process to appear ...
I0120 12:32:42.665985 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 12:32:42.666060 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 12:32:42.761425 663170 cri.go:89] found id: "5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
I0120 12:32:42.761457 663170 cri.go:89] found id: "6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
I0120 12:32:42.761464 663170 cri.go:89] found id: ""
I0120 12:32:42.761472 663170 logs.go:282] 2 containers: [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15]
I0120 12:32:42.761530 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.766334 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.770402 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 12:32:42.770477 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 12:32:42.840870 663170 cri.go:89] found id: "d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
I0120 12:32:42.840890 663170 cri.go:89] found id: "4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
I0120 12:32:42.840895 663170 cri.go:89] found id: ""
I0120 12:32:42.840902 663170 logs.go:282] 2 containers: [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf]
I0120 12:32:42.840959 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.846031 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.850194 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 12:32:42.850260 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 12:32:42.904928 663170 cri.go:89] found id: "b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
I0120 12:32:42.904957 663170 cri.go:89] found id: "31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
I0120 12:32:42.904963 663170 cri.go:89] found id: ""
I0120 12:32:42.904970 663170 logs.go:282] 2 containers: [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075]
I0120 12:32:42.905025 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.909172 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.912704 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 12:32:42.912772 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 12:32:42.968944 663170 cri.go:89] found id: "d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
I0120 12:32:42.969015 663170 cri.go:89] found id: "758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
I0120 12:32:42.969035 663170 cri.go:89] found id: ""
I0120 12:32:42.969061 663170 logs.go:282] 2 containers: [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0]
I0120 12:32:42.969168 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.973579 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.978112 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 12:32:42.978252 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 12:32:43.050120 663170 cri.go:89] found id: "3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
I0120 12:32:43.050196 663170 cri.go:89] found id: "a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
I0120 12:32:43.050216 663170 cri.go:89] found id: ""
I0120 12:32:43.050241 663170 logs.go:282] 2 containers: [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03]
I0120 12:32:43.050338 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.054664 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.058589 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 12:32:43.058720 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 12:32:43.117777 663170 cri.go:89] found id: "beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
I0120 12:32:43.117802 663170 cri.go:89] found id: "8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
I0120 12:32:43.117807 663170 cri.go:89] found id: ""
I0120 12:32:43.117814 663170 logs.go:282] 2 containers: [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2]
I0120 12:32:43.117901 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.126390 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.136897 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 12:32:43.137072 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 12:32:43.200437 663170 cri.go:89] found id: "a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
I0120 12:32:43.200515 663170 cri.go:89] found id: "2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
I0120 12:32:43.200538 663170 cri.go:89] found id: ""
I0120 12:32:43.200565 663170 logs.go:282] 2 containers: [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10]
I0120 12:32:43.200662 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.204950 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.208929 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 12:32:43.209037 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 12:32:43.259134 663170 cri.go:89] found id: "d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
I0120 12:32:43.259192 663170 cri.go:89] found id: ""
I0120 12:32:43.259224 663170 logs.go:282] 1 container: [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6]
I0120 12:32:43.259308 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.263374 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 12:32:43.263497 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 12:32:43.311336 663170 cri.go:89] found id: "2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
I0120 12:32:43.311398 663170 cri.go:89] found id: "fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
I0120 12:32:43.311427 663170 cri.go:89] found id: ""
I0120 12:32:43.311452 663170 logs.go:282] 2 containers: [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224]
I0120 12:32:43.311549 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.315630 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.319342 663170 logs.go:123] Gathering logs for kubernetes-dashboard [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6] ...
I0120 12:32:43.319422 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
I0120 12:32:43.372921 663170 logs.go:123] Gathering logs for storage-provisioner [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d] ...
I0120 12:32:43.373003 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
I0120 12:32:43.427917 663170 logs.go:123] Gathering logs for containerd ...
I0120 12:32:43.427995 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 12:32:43.498070 663170 logs.go:123] Gathering logs for container status ...
I0120 12:32:43.498147 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 12:32:43.571418 663170 logs.go:123] Gathering logs for kubelet ...
I0120 12:32:43.571498 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0120 12:32:43.636420 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.205553 655 reflector.go:138] object-"kube-system"/"coredns-token-brbgd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-brbgd" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.636782 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.218010 655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.637034 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.305944 655 reflector.go:138] object-"kube-system"/"metrics-server-token-t7n5d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t7n5d" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.637271 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306033 655 reflector.go:138] object-"kube-system"/"kindnet-token-htldq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-htldq" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.637517 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306082 655 reflector.go:138] object-"kube-system"/"kube-proxy-token-85wbm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-85wbm" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.637864 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306131 655 reflector.go:138] object-"default"/"default-token-pngw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pngw5" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.638104 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306180 655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.638355 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306224 655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-fgdsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-fgdsf" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.646625 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.721324 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:43.646855 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.754709 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.650337 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:35 old-k8s-version-618033 kubelet[655]: E0120 12:27:35.610171 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:43.652382 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.614294 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.652979 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.976430 655 pod_workers.go:191] Error syncing pod 7614f8ae-aae6-4203-96ff-40a900278cf6 ("storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"
W0120 12:32:43.653464 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:50 old-k8s-version-618033 kubelet[655]: E0120 12:27:50.989213 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.653826 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:51 old-k8s-version-618033 kubelet[655]: E0120 12:27:51.992870 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.654514 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:58 old-k8s-version-618033 kubelet[655]: E0120 12:27:58.673483 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.656989 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:00 old-k8s-version-618033 kubelet[655]: E0120 12:28:00.612213 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:43.657747 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:12 old-k8s-version-618033 kubelet[655]: E0120 12:28:12.084652 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.657954 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:13 old-k8s-version-618033 kubelet[655]: E0120 12:28:13.597787 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.658300 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:18 old-k8s-version-618033 kubelet[655]: E0120 12:28:18.673267 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.658511 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:26 old-k8s-version-618033 kubelet[655]: E0120 12:28:26.596529 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.658872 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:30 old-k8s-version-618033 kubelet[655]: E0120 12:28:30.596223 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.659081 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:40 old-k8s-version-618033 kubelet[655]: E0120 12:28:40.596632 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.659690 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:43 old-k8s-version-618033 kubelet[655]: E0120 12:28:43.165668 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.660040 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:48 old-k8s-version-618033 kubelet[655]: E0120 12:28:48.673706 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.662593 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:51 old-k8s-version-618033 kubelet[655]: E0120 12:28:51.610383 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:43.663687 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:03 old-k8s-version-618033 kubelet[655]: E0120 12:29:03.602577 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.664062 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:04 old-k8s-version-618033 kubelet[655]: E0120 12:29:04.596213 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.664277 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:17 old-k8s-version-618033 kubelet[655]: E0120 12:29:17.597227 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.664622 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:18 old-k8s-version-618033 kubelet[655]: E0120 12:29:18.596696 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.664827 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:30 old-k8s-version-618033 kubelet[655]: E0120 12:29:30.596660 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.665441 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:33 old-k8s-version-618033 kubelet[655]: E0120 12:29:33.299251 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.665804 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:38 old-k8s-version-618033 kubelet[655]: E0120 12:29:38.673765 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.666010 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:42 old-k8s-version-618033 kubelet[655]: E0120 12:29:42.596621 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.666361 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:49 old-k8s-version-618033 kubelet[655]: E0120 12:29:49.596280 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.666576 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:57 old-k8s-version-618033 kubelet[655]: E0120 12:29:57.598023 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.666934 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:03 old-k8s-version-618033 kubelet[655]: E0120 12:30:03.596329 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.667150 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:10 old-k8s-version-618033 kubelet[655]: E0120 12:30:10.596520 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.667504 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:14 old-k8s-version-618033 kubelet[655]: E0120 12:30:14.596119 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.670002 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:22 old-k8s-version-618033 kubelet[655]: E0120 12:30:22.605228 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:43.670427 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:28 old-k8s-version-618033 kubelet[655]: E0120 12:30:28.596161 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.670640 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:36 old-k8s-version-618033 kubelet[655]: E0120 12:30:36.596791 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.670990 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:41 old-k8s-version-618033 kubelet[655]: E0120 12:30:41.596271 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.671204 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:49 old-k8s-version-618033 kubelet[655]: E0120 12:30:49.600904 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.671841 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:54 old-k8s-version-618033 kubelet[655]: E0120 12:30:54.524938 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.672233 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:58 old-k8s-version-618033 kubelet[655]: E0120 12:30:58.673184 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.672442 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:00 old-k8s-version-618033 kubelet[655]: E0120 12:31:00.596643 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.672687 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:12 old-k8s-version-618033 kubelet[655]: E0120 12:31:12.596594 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.673044 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:13 old-k8s-version-618033 kubelet[655]: E0120 12:31:13.596456 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.673252 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:24 old-k8s-version-618033 kubelet[655]: E0120 12:31:24.596574 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.673646 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:25 old-k8s-version-618033 kubelet[655]: E0120 12:31:25.596263 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.673853 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:38 old-k8s-version-618033 kubelet[655]: E0120 12:31:38.596565 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.674203 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: E0120 12:31:40.596161 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.674413 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:49 old-k8s-version-618033 kubelet[655]: E0120 12:31:49.596536 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.674775 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: E0120 12:31:53.597121 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.674982 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:03 old-k8s-version-618033 kubelet[655]: E0120 12:32:03.599363 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.675330 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: E0120 12:32:08.596270 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.675536 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.675891 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.676100 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.676456 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.676672 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 12:32:43.676711 663170 logs.go:123] Gathering logs for kube-apiserver [6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15] ...
I0120 12:32:43.676747 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
I0120 12:32:43.743941 663170 logs.go:123] Gathering logs for kube-scheduler [758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0] ...
I0120 12:32:43.743981 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
I0120 12:32:43.806114 663170 logs.go:123] Gathering logs for kube-proxy [a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03] ...
I0120 12:32:43.806150 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
I0120 12:32:43.857911 663170 logs.go:123] Gathering logs for kindnet [2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10] ...
I0120 12:32:43.857941 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
I0120 12:32:43.917003 663170 logs.go:123] Gathering logs for kube-apiserver [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a] ...
I0120 12:32:43.917032 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
I0120 12:32:43.992709 663170 logs.go:123] Gathering logs for etcd [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7] ...
I0120 12:32:43.992759 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
I0120 12:32:44.068689 663170 logs.go:123] Gathering logs for coredns [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5] ...
I0120 12:32:44.068723 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
I0120 12:32:44.123499 663170 logs.go:123] Gathering logs for kube-proxy [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352] ...
I0120 12:32:44.123529 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
I0120 12:32:44.181810 663170 logs.go:123] Gathering logs for dmesg ...
I0120 12:32:44.181838 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 12:32:44.204612 663170 logs.go:123] Gathering logs for etcd [4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf] ...
I0120 12:32:44.204641 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
I0120 12:32:44.262671 663170 logs.go:123] Gathering logs for kindnet [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3] ...
I0120 12:32:44.262704 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
I0120 12:32:44.313537 663170 logs.go:123] Gathering logs for storage-provisioner [fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224] ...
I0120 12:32:44.313569 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
I0120 12:32:44.385646 663170 logs.go:123] Gathering logs for kube-controller-manager [8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2] ...
I0120 12:32:44.385744 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
I0120 12:32:44.474032 663170 logs.go:123] Gathering logs for describe nodes ...
I0120 12:32:44.474111 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 12:32:44.677528 663170 logs.go:123] Gathering logs for coredns [31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075] ...
I0120 12:32:44.677562 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
I0120 12:32:44.721616 663170 logs.go:123] Gathering logs for kube-scheduler [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e] ...
I0120 12:32:44.721690 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
I0120 12:32:44.768059 663170 logs.go:123] Gathering logs for kube-controller-manager [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0] ...
I0120 12:32:44.768141 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
I0120 12:32:44.829786 663170 out.go:358] Setting ErrFile to fd 2...
I0120 12:32:44.829821 663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 12:32:44.829881 663170 out.go:270] X Problems detected in kubelet:
X Problems detected in kubelet:
W0120 12:32:44.829892 663170 out.go:270] Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:44.829917 663170 out.go:270] Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:44.829953 663170 out.go:270] Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:44.829961 663170 out.go:270] Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:44.829967 663170 out.go:270] Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 12:32:44.829972 663170 out.go:358] Setting ErrFile to fd 2...
I0120 12:32:44.829979 663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:32:54.831056 663170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 12:32:54.842999 663170 api_server.go:72] duration metric: took 5m55.169056051s to wait for apiserver process to appear ...
I0120 12:32:54.843025 663170 api_server.go:88] waiting for apiserver healthz status ...
I0120 12:32:54.843060 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 12:32:54.843120 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 12:32:54.892331 663170 cri.go:89] found id: "5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
I0120 12:32:54.892355 663170 cri.go:89] found id: "6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
I0120 12:32:54.892360 663170 cri.go:89] found id: ""
I0120 12:32:54.892367 663170 logs.go:282] 2 containers: [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15]
I0120 12:32:54.892424 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:54.896167 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:54.899483 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 12:32:54.899551 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 12:32:54.947556 663170 cri.go:89] found id: "d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
I0120 12:32:54.947585 663170 cri.go:89] found id: "4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
I0120 12:32:54.947591 663170 cri.go:89] found id: ""
I0120 12:32:54.947598 663170 logs.go:282] 2 containers: [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf]
I0120 12:32:54.947656 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:54.951481 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:54.955038 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 12:32:54.955113 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 12:32:54.999061 663170 cri.go:89] found id: "b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
I0120 12:32:54.999094 663170 cri.go:89] found id: "31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
I0120 12:32:54.999099 663170 cri.go:89] found id: ""
I0120 12:32:54.999106 663170 logs.go:282] 2 containers: [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075]
I0120 12:32:54.999164 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.003398 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.006791 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 12:32:55.006865 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 12:32:55.053724 663170 cri.go:89] found id: "d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
I0120 12:32:55.053750 663170 cri.go:89] found id: "758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
I0120 12:32:55.053755 663170 cri.go:89] found id: ""
I0120 12:32:55.053763 663170 logs.go:282] 2 containers: [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0]
I0120 12:32:55.053826 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.057957 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.061739 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 12:32:55.061865 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 12:32:55.112602 663170 cri.go:89] found id: "3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
I0120 12:32:55.112625 663170 cri.go:89] found id: "a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
I0120 12:32:55.112631 663170 cri.go:89] found id: ""
I0120 12:32:55.112638 663170 logs.go:282] 2 containers: [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03]
I0120 12:32:55.112718 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.116611 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.121704 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 12:32:55.121779 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 12:32:55.181387 663170 cri.go:89] found id: "beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
I0120 12:32:55.181409 663170 cri.go:89] found id: "8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
I0120 12:32:55.181414 663170 cri.go:89] found id: ""
I0120 12:32:55.181421 663170 logs.go:282] 2 containers: [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2]
I0120 12:32:55.181497 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.186863 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.191042 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 12:32:55.191113 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 12:32:55.244409 663170 cri.go:89] found id: "a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
I0120 12:32:55.244442 663170 cri.go:89] found id: "2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
I0120 12:32:55.244449 663170 cri.go:89] found id: ""
I0120 12:32:55.244456 663170 logs.go:282] 2 containers: [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10]
I0120 12:32:55.244522 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.253198 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.260336 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 12:32:55.260427 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 12:32:55.307825 663170 cri.go:89] found id: "2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
I0120 12:32:55.307847 663170 cri.go:89] found id: "fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
I0120 12:32:55.307851 663170 cri.go:89] found id: ""
I0120 12:32:55.307858 663170 logs.go:282] 2 containers: [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224]
I0120 12:32:55.307925 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.311753 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.315323 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 12:32:55.315404 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 12:32:55.356240 663170 cri.go:89] found id: "d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
I0120 12:32:55.356269 663170 cri.go:89] found id: ""
I0120 12:32:55.356277 663170 logs.go:282] 1 containers: [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6]
I0120 12:32:55.356345 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.359958 663170 logs.go:123] Gathering logs for kubelet ...
I0120 12:32:55.359984 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0120 12:32:55.418304 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.205553 655 reflector.go:138] object-"kube-system"/"coredns-token-brbgd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-brbgd" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.418614 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.218010 655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.418849 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.305944 655 reflector.go:138] object-"kube-system"/"metrics-server-token-t7n5d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t7n5d" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.419071 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306033 655 reflector.go:138] object-"kube-system"/"kindnet-token-htldq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-htldq" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.419291 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306082 655 reflector.go:138] object-"kube-system"/"kube-proxy-token-85wbm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-85wbm" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.419546 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306131 655 reflector.go:138] object-"default"/"default-token-pngw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pngw5" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.419756 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306180 655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.419984 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306224 655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-fgdsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-fgdsf" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.428109 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.721324 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:55.428309 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.754709 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.431722 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:35 old-k8s-version-618033 kubelet[655]: E0120 12:27:35.610171 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:55.433745 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.614294 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.434318 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.976430 655 pod_workers.go:191] Error syncing pod 7614f8ae-aae6-4203-96ff-40a900278cf6 ("storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"
W0120 12:32:55.434787 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:50 old-k8s-version-618033 kubelet[655]: E0120 12:27:50.989213 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.435118 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:51 old-k8s-version-618033 kubelet[655]: E0120 12:27:51.992870 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.435792 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:58 old-k8s-version-618033 kubelet[655]: E0120 12:27:58.673483 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.438245 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:00 old-k8s-version-618033 kubelet[655]: E0120 12:28:00.612213 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:55.438973 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:12 old-k8s-version-618033 kubelet[655]: E0120 12:28:12.084652 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.439157 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:13 old-k8s-version-618033 kubelet[655]: E0120 12:28:13.597787 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.439485 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:18 old-k8s-version-618033 kubelet[655]: E0120 12:28:18.673267 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.439669 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:26 old-k8s-version-618033 kubelet[655]: E0120 12:28:26.596529 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.439998 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:30 old-k8s-version-618033 kubelet[655]: E0120 12:28:30.596223 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.440181 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:40 old-k8s-version-618033 kubelet[655]: E0120 12:28:40.596632 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.440772 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:43 old-k8s-version-618033 kubelet[655]: E0120 12:28:43.165668 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.441099 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:48 old-k8s-version-618033 kubelet[655]: E0120 12:28:48.673706 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.443603 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:51 old-k8s-version-618033 kubelet[655]: E0120 12:28:51.610383 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:55.443790 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:03 old-k8s-version-618033 kubelet[655]: E0120 12:29:03.602577 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.444120 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:04 old-k8s-version-618033 kubelet[655]: E0120 12:29:04.596213 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.444327 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:17 old-k8s-version-618033 kubelet[655]: E0120 12:29:17.597227 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.444659 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:18 old-k8s-version-618033 kubelet[655]: E0120 12:29:18.596696 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.444844 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:30 old-k8s-version-618033 kubelet[655]: E0120 12:29:30.596660 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.445435 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:33 old-k8s-version-618033 kubelet[655]: E0120 12:29:33.299251 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.445773 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:38 old-k8s-version-618033 kubelet[655]: E0120 12:29:38.673765 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.445961 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:42 old-k8s-version-618033 kubelet[655]: E0120 12:29:42.596621 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.446294 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:49 old-k8s-version-618033 kubelet[655]: E0120 12:29:49.596280 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.446482 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:57 old-k8s-version-618033 kubelet[655]: E0120 12:29:57.598023 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.446813 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:03 old-k8s-version-618033 kubelet[655]: E0120 12:30:03.596329 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.446998 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:10 old-k8s-version-618033 kubelet[655]: E0120 12:30:10.596520 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.447326 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:14 old-k8s-version-618033 kubelet[655]: E0120 12:30:14.596119 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.449780 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:22 old-k8s-version-618033 kubelet[655]: E0120 12:30:22.605228 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:55.450110 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:28 old-k8s-version-618033 kubelet[655]: E0120 12:30:28.596161 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.450297 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:36 old-k8s-version-618033 kubelet[655]: E0120 12:30:36.596791 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.450632 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:41 old-k8s-version-618033 kubelet[655]: E0120 12:30:41.596271 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.450817 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:49 old-k8s-version-618033 kubelet[655]: E0120 12:30:49.600904 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.451412 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:54 old-k8s-version-618033 kubelet[655]: E0120 12:30:54.524938 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.451745 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:58 old-k8s-version-618033 kubelet[655]: E0120 12:30:58.673184 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.451930 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:00 old-k8s-version-618033 kubelet[655]: E0120 12:31:00.596643 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.452114 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:12 old-k8s-version-618033 kubelet[655]: E0120 12:31:12.596594 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.452442 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:13 old-k8s-version-618033 kubelet[655]: E0120 12:31:13.596456 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.452627 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:24 old-k8s-version-618033 kubelet[655]: E0120 12:31:24.596574 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.452954 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:25 old-k8s-version-618033 kubelet[655]: E0120 12:31:25.596263 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.453138 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:38 old-k8s-version-618033 kubelet[655]: E0120 12:31:38.596565 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.453469 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: E0120 12:31:40.596161 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.453659 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:49 old-k8s-version-618033 kubelet[655]: E0120 12:31:49.596536 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.453990 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: E0120 12:31:53.597121 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.454174 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:03 old-k8s-version-618033 kubelet[655]: E0120 12:32:03.599363 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.454503 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: E0120 12:32:08.596270 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.454690 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.455019 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.455203 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.455532 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.455716 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.456045 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.456231 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 12:32:55.456240 663170 logs.go:123] Gathering logs for coredns [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5] ...
I0120 12:32:55.456257 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
I0120 12:32:55.498655 663170 logs.go:123] Gathering logs for kube-proxy [a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03] ...
I0120 12:32:55.498685 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
I0120 12:32:55.545339 663170 logs.go:123] Gathering logs for kube-controller-manager [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0] ...
I0120 12:32:55.545367 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
I0120 12:32:55.695497 663170 logs.go:123] Gathering logs for kube-controller-manager [8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2] ...
I0120 12:32:55.695578 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
I0120 12:32:55.790895 663170 logs.go:123] Gathering logs for dmesg ...
I0120 12:32:55.790932 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 12:32:55.808465 663170 logs.go:123] Gathering logs for kube-apiserver [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a] ...
I0120 12:32:55.808496 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
I0120 12:32:55.866823 663170 logs.go:123] Gathering logs for kube-apiserver [6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15] ...
I0120 12:32:55.866858 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
I0120 12:32:55.996274 663170 logs.go:123] Gathering logs for kube-scheduler [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e] ...
I0120 12:32:55.996312 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
I0120 12:32:56.059035 663170 logs.go:123] Gathering logs for storage-provisioner [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d] ...
I0120 12:32:56.059067 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
I0120 12:32:56.108806 663170 logs.go:123] Gathering logs for etcd [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7] ...
I0120 12:32:56.108854 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
I0120 12:32:56.180797 663170 logs.go:123] Gathering logs for kindnet [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3] ...
I0120 12:32:56.180898 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
I0120 12:32:56.249831 663170 logs.go:123] Gathering logs for storage-provisioner [fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224] ...
I0120 12:32:56.249864 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
I0120 12:32:56.297821 663170 logs.go:123] Gathering logs for kubernetes-dashboard [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6] ...
I0120 12:32:56.297851 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
I0120 12:32:56.353347 663170 logs.go:123] Gathering logs for container status ...
I0120 12:32:56.353381 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 12:32:56.414819 663170 logs.go:123] Gathering logs for describe nodes ...
I0120 12:32:56.414848 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 12:32:56.561358 663170 logs.go:123] Gathering logs for etcd [4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf] ...
I0120 12:32:56.561390 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
I0120 12:32:56.626001 663170 logs.go:123] Gathering logs for coredns [31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075] ...
I0120 12:32:56.626092 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
I0120 12:32:56.674576 663170 logs.go:123] Gathering logs for kube-scheduler [758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0] ...
I0120 12:32:56.674668 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
I0120 12:32:56.731078 663170 logs.go:123] Gathering logs for kube-proxy [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352] ...
I0120 12:32:56.731162 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
I0120 12:32:56.784777 663170 logs.go:123] Gathering logs for kindnet [2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10] ...
I0120 12:32:56.784856 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
I0120 12:32:56.839707 663170 logs.go:123] Gathering logs for containerd ...
I0120 12:32:56.839793 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 12:32:56.911951 663170 out.go:358] Setting ErrFile to fd 2...
I0120 12:32:56.911990 663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 12:32:56.912046 663170 out.go:270] X Problems detected in kubelet:
W0120 12:32:56.912063 663170 out.go:270] Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:56.912071 663170 out.go:270] Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:56.912084 663170 out.go:270] Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:56.912099 663170 out.go:270] Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:56.912124 663170 out.go:270] Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 12:32:56.912129 663170 out.go:358] Setting ErrFile to fd 2...
I0120 12:32:56.912136 663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:33:06.913477 663170 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0120 12:33:06.924185 663170 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0120 12:33:06.927401 663170 out.go:201]
W0120 12:33:06.930237 663170 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0120 12:33:06.930282 663170 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0120 12:33:06.930305 663170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0120 12:33:06.930314 663170 out.go:270] *
W0120 12:33:06.931223 663170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0120 12:33:06.933295 663170 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-618033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
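Note on the failure above: this is minikube's K8S_UNHEALTHY_CONTROL_PLANE exit path. The apiserver did answer /healthz, but the control plane never reported the requested v1.20.0 within the 6m0s wait, while metrics-server sat in ImagePullBackOff (the test deliberately points it at fake.domain, per the "addons enable metrics-server" entry in the Audit table below) and dashboard-metrics-scraper sat in CrashLoopBackOff. A minimal manual follow-up sketch, limited to what the log itself suggests or implies; the kubectl context name and namespaces are assumptions based on this run's profile:
	# look at the two pods that kept backing off (context/namespaces assumed from this run)
	kubectl --context old-k8s-version-618033 -n kube-system get pods
	kubectl --context old-k8s-version-618033 -n kubernetes-dashboard get pods
	# the recovery suggested by the failure message
	out/minikube-linux-arm64 delete --all --purge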
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-618033
helpers_test.go:235: (dbg) docker inspect old-k8s-version-618033:
-- stdout --
[
{
"Id": "ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889",
"Created": "2025-01-20T12:23:38.412020885Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 663372,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-01-20T12:26:52.864273494Z",
"FinishedAt": "2025-01-20T12:26:51.948457928Z"
},
"Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
"ResolvConfPath": "/var/lib/docker/containers/ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889/hostname",
"HostsPath": "/var/lib/docker/containers/ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889/hosts",
"LogPath": "/var/lib/docker/containers/ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889/ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889-json.log",
"Name": "/old-k8s-version-618033",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-618033:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-618033",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/bc4daebc41f34d91bb0542c240830738e250c018c88a571a986a3d2ba28de143-init/diff:/var/lib/docker/overlay2/edf43674e048a8839ae0b875f0e8c5a4a292c844ffe81a34a599fd5845eee425/diff",
"MergedDir": "/var/lib/docker/overlay2/bc4daebc41f34d91bb0542c240830738e250c018c88a571a986a3d2ba28de143/merged",
"UpperDir": "/var/lib/docker/overlay2/bc4daebc41f34d91bb0542c240830738e250c018c88a571a986a3d2ba28de143/diff",
"WorkDir": "/var/lib/docker/overlay2/bc4daebc41f34d91bb0542c240830738e250c018c88a571a986a3d2ba28de143/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-618033",
"Source": "/var/lib/docker/volumes/old-k8s-version-618033/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-618033",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-618033",
"name.minikube.sigs.k8s.io": "old-k8s-version-618033",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "0d1406f09bb4fdce3719564352b76862ef42db982dc8c5453eb7eba1af7cecbf",
"SandboxKey": "/var/run/docker/netns/0d1406f09bb4",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33464"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33465"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33468"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33466"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33467"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-618033": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:55:02",
"DriverOpts": null,
"NetworkID": "75fb228165d452b8040c2a15a4b962bf51901d1bfcb0a4891b16500820d18139",
"EndpointID": "443240ebb9206c877dc78bd4fbcd8b0502dc30d9649e0f07ab64cc2f8b6dccb0",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-618033",
"ec70bc7fcb97"
]
}
}
}
}
]
-- /stdout --
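The inspect dump above is verbose; a short sketch for pulling out only the fields the rest of the post-mortem relies on, the published host ports and the container's static IP (jq is assumed to be available on the CI host and is not used by the test itself):
	docker inspect old-k8s-version-618033 \
	  | jq '.[0] | {ports: .NetworkSettings.Ports, ip: .NetworkSettings.Networks["old-k8s-version-618033"].IPAddress}'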
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-618033 -n old-k8s-version-618033
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-618033 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-618033 logs -n 25: (2.148475644s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-152963 | cert-expiration-152963 | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:22 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-236901 | force-systemd-env-236901 | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:22 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-236901 | force-systemd-env-236901 | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:22 UTC |
| start | -p cert-options-753716 | cert-options-753716 | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:23 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-753716 ssh | cert-options-753716 | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:23 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-753716 -- sudo | cert-options-753716 | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:23 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-753716 | cert-options-753716 | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:23 UTC |
| start | -p old-k8s-version-618033 | old-k8s-version-618033 | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:26 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-152963 | cert-expiration-152963 | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:26 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-152963 | cert-expiration-152963 | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
| start | -p | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:27 UTC |
| | default-k8s-diff-port-800877 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.0 | | | | | |
| addons | enable metrics-server -p old-k8s-version-618033 | old-k8s-version-618033 | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-618033 | old-k8s-version-618033 | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-618033 | old-k8s-version-618033 | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-618033 | old-k8s-version-618033 | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p default-k8s-diff-port-800877 | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
| | default-k8s-diff-port-800877 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-800877 | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:31 UTC |
| | default-k8s-diff-port-800877 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.0 | | | | | |
| image | default-k8s-diff-port-800877 | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | 20 Jan 25 12:32 UTC |
| | image list --format=json | | | | | |
| pause | -p | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | 20 Jan 25 12:32 UTC |
| | default-k8s-diff-port-800877 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | 20 Jan 25 12:32 UTC |
| | default-k8s-diff-port-800877 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | 20 Jan 25 12:32 UTC |
| | default-k8s-diff-port-800877 | | | | | |
| delete | -p | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | 20 Jan 25 12:32 UTC |
| | default-k8s-diff-port-800877 | | | | | |
| start | -p embed-certs-180778 | embed-certs-180778 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.0 | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/20 12:32:18
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.23.4 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0120 12:32:18.230102 672840 out.go:345] Setting OutFile to fd 1 ...
I0120 12:32:18.230478 672840 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:32:18.230517 672840 out.go:358] Setting ErrFile to fd 2...
I0120 12:32:18.230549 672840 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:32:18.230822 672840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
I0120 12:32:18.231325 672840 out.go:352] Setting JSON to false
I0120 12:32:18.232359 672840 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8084,"bootTime":1737368255,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0120 12:32:18.232510 672840 start.go:139] virtualization:
I0120 12:32:18.238921 672840 out.go:177] * [embed-certs-180778] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0120 12:32:18.242366 672840 out.go:177] - MINIKUBE_LOCATION=20151
I0120 12:32:18.242443 672840 notify.go:220] Checking for updates...
I0120 12:32:18.248801 672840 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 12:32:18.252051 672840 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
I0120 12:32:18.255101 672840 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
I0120 12:32:18.258199 672840 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0120 12:32:18.261213 672840 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 12:32:18.264742 672840 config.go:182] Loaded profile config "old-k8s-version-618033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0120 12:32:18.264881 672840 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 12:32:18.291471 672840 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
I0120 12:32:18.291597 672840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 12:32:18.348736 672840 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 12:32:18.339174535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 12:32:18.348848 672840 docker.go:318] overlay module found
I0120 12:32:18.352000 672840 out.go:177] * Using the docker driver based on user configuration
I0120 12:32:18.354995 672840 start.go:297] selected driver: docker
I0120 12:32:18.355024 672840 start.go:901] validating driver "docker" against <nil>
I0120 12:32:18.355039 672840 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 12:32:18.355884 672840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0120 12:32:18.431057 672840 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 12:32:18.42084733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0120 12:32:18.431276 672840 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0120 12:32:18.431522 672840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 12:32:18.434534 672840 out.go:177] * Using Docker driver with root privileges
I0120 12:32:18.437570 672840 cni.go:84] Creating CNI manager for ""
I0120 12:32:18.437724 672840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 12:32:18.437735 672840 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0120 12:32:18.437827 672840 start.go:340] cluster config:
{Name:embed-certs-180778 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-180778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAge
ntPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 12:32:18.443137 672840 out.go:177] * Starting "embed-certs-180778" primary control-plane node in "embed-certs-180778" cluster
I0120 12:32:18.446116 672840 cache.go:121] Beginning downloading kic base image for docker with containerd
I0120 12:32:18.449059 672840 out.go:177] * Pulling base image v0.0.46 ...
I0120 12:32:18.451934 672840 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 12:32:18.451999 672840 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
I0120 12:32:18.452012 672840 cache.go:56] Caching tarball of preloaded images
I0120 12:32:18.452025 672840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0120 12:32:18.452144 672840 preload.go:172] Found /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0120 12:32:18.452157 672840 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
I0120 12:32:18.452280 672840 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/config.json ...
I0120 12:32:18.452314 672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/config.json: {Name:mk4b172d32fdfc0b2fc3a01d2d2117ddf63ff5ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:32:18.472642 672840 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0120 12:32:18.472668 672840 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0120 12:32:18.472682 672840 cache.go:227] Successfully downloaded all kic artifacts
I0120 12:32:18.472715 672840 start.go:360] acquireMachinesLock for embed-certs-180778: {Name:mk5e06d24869773ea5a6026455c6dbb830cd62b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:32:18.472824 672840 start.go:364] duration metric: took 87.402µs to acquireMachinesLock for "embed-certs-180778"
I0120 12:32:18.472857 672840 start.go:93] Provisioning new machine with config: &{Name:embed-certs-180778 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-180778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 12:32:18.472934 672840 start.go:125] createHost starting for "" (driver="docker")
I0120 12:32:17.673799 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:20.173022 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:22.174529 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:18.476306 672840 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0120 12:32:18.476570 672840 start.go:159] libmachine.API.Create for "embed-certs-180778" (driver="docker")
I0120 12:32:18.476609 672840 client.go:168] LocalClient.Create starting
I0120 12:32:18.476712 672840 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem
I0120 12:32:18.476757 672840 main.go:141] libmachine: Decoding PEM data...
I0120 12:32:18.476779 672840 main.go:141] libmachine: Parsing certificate...
I0120 12:32:18.476836 672840 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem
I0120 12:32:18.476858 672840 main.go:141] libmachine: Decoding PEM data...
I0120 12:32:18.476869 672840 main.go:141] libmachine: Parsing certificate...
I0120 12:32:18.477246 672840 cli_runner.go:164] Run: docker network inspect embed-certs-180778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0120 12:32:18.500724 672840 cli_runner.go:211] docker network inspect embed-certs-180778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0120 12:32:18.500809 672840 network_create.go:284] running [docker network inspect embed-certs-180778] to gather additional debugging logs...
I0120 12:32:18.500836 672840 cli_runner.go:164] Run: docker network inspect embed-certs-180778
W0120 12:32:18.518233 672840 cli_runner.go:211] docker network inspect embed-certs-180778 returned with exit code 1
I0120 12:32:18.518266 672840 network_create.go:287] error running [docker network inspect embed-certs-180778]: docker network inspect embed-certs-180778: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-180778 not found
I0120 12:32:18.518281 672840 network_create.go:289] output of [docker network inspect embed-certs-180778]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-180778 not found
** /stderr **
I0120 12:32:18.518383 672840 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 12:32:18.537730 672840 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ab00e182d66a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a6:06:fc:f6} reservation:<nil>}
I0120 12:32:18.538122 672840 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f204b1132b59 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f7:a7:3b:33} reservation:<nil>}
I0120 12:32:18.538486 672840 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1b8277a01988 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:87:7b:1a:fe} reservation:<nil>}
I0120 12:32:18.539078 672840 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198fd00}
I0120 12:32:18.539105 672840 network_create.go:124] attempt to create docker network embed-certs-180778 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0120 12:32:18.539176 672840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-180778 embed-certs-180778
I0120 12:32:18.635541 672840 network_create.go:108] docker network embed-certs-180778 192.168.76.0/24 created
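As an aside, the subnet probing above (skip 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24, then create 192.168.76.0/24) can be double-checked by hand with the same inspect template the run uses elsewhere; this is purely illustrative and not part of the test:
	docker network inspect embed-certs-180778 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'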
I0120 12:32:18.635575 672840 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-180778" container
I0120 12:32:18.635655 672840 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0120 12:32:18.652648 672840 cli_runner.go:164] Run: docker volume create embed-certs-180778 --label name.minikube.sigs.k8s.io=embed-certs-180778 --label created_by.minikube.sigs.k8s.io=true
I0120 12:32:18.677046 672840 oci.go:103] Successfully created a docker volume embed-certs-180778
I0120 12:32:18.677189 672840 cli_runner.go:164] Run: docker run --rm --name embed-certs-180778-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-180778 --entrypoint /usr/bin/test -v embed-certs-180778:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
I0120 12:32:19.359315 672840 oci.go:107] Successfully prepared a docker volume embed-certs-180778
I0120 12:32:19.359369 672840 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 12:32:19.359391 672840 kic.go:194] Starting extracting preloaded images to volume ...
I0120 12:32:19.359463 672840 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-180778:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
I0120 12:32:24.671226 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:26.671707 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:24.277367 672840 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-180778:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.917864778s)
I0120 12:32:24.277400 672840 kic.go:203] duration metric: took 4.918005792s to extract preloaded images to volume ...
W0120 12:32:24.277541 672840 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0120 12:32:24.277681 672840 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0120 12:32:24.334120 672840 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-180778 --name embed-certs-180778 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-180778 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-180778 --network embed-certs-180778 --ip 192.168.76.2 --volume embed-certs-180778:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
I0120 12:32:24.728856 672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Running}}
I0120 12:32:24.749194 672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
I0120 12:32:24.779611 672840 cli_runner.go:164] Run: docker exec embed-certs-180778 stat /var/lib/dpkg/alternatives/iptables
I0120 12:32:24.835782 672840 oci.go:144] the created container "embed-certs-180778" has a running status.
I0120 12:32:24.835812 672840 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa...
I0120 12:32:25.283807 672840 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0120 12:32:25.331387 672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
I0120 12:32:25.354013 672840 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0120 12:32:25.354033 672840 kic_runner.go:114] Args: [docker exec --privileged embed-certs-180778 chown docker:docker /home/docker/.ssh/authorized_keys]
I0120 12:32:25.419570 672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
I0120 12:32:25.464748 672840 machine.go:93] provisionDockerMachine start ...
I0120 12:32:25.464852 672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
I0120 12:32:25.493057 672840 main.go:141] libmachine: Using SSH client type: native
I0120 12:32:25.493383 672840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33474 <nil> <nil>}
I0120 12:32:25.493403 672840 main.go:141] libmachine: About to run SSH command:
hostname
I0120 12:32:25.494070 672840 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36120->127.0.0.1:33474: read: connection reset by peer
I0120 12:32:28.620940 672840 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-180778
I0120 12:32:28.620963 672840 ubuntu.go:169] provisioning hostname "embed-certs-180778"
I0120 12:32:28.621034 672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
I0120 12:32:28.639238 672840 main.go:141] libmachine: Using SSH client type: native
I0120 12:32:28.639506 672840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33474 <nil> <nil>}
I0120 12:32:28.639525 672840 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-180778 && echo "embed-certs-180778" | sudo tee /etc/hostname
I0120 12:32:28.783653 672840 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-180778
I0120 12:32:28.783791 672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
I0120 12:32:28.803537 672840 main.go:141] libmachine: Using SSH client type: native
I0120 12:32:28.803785 672840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil> [] 0s} 127.0.0.1 33474 <nil> <nil>}
I0120 12:32:28.803802 672840 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-180778' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-180778/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-180778' | sudo tee -a /etc/hosts;
fi
fi
I0120 12:32:28.925771 672840 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 12:32:28.925801 672840 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20151-446459/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-446459/.minikube}
I0120 12:32:28.925822 672840 ubuntu.go:177] setting up certificates
I0120 12:32:28.925835 672840 provision.go:84] configureAuth start
I0120 12:32:28.925902 672840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-180778
I0120 12:32:28.944161 672840 provision.go:143] copyHostCerts
I0120 12:32:28.944236 672840 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem, removing ...
I0120 12:32:28.944251 672840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem
I0120 12:32:28.944329 672840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem (1082 bytes)
I0120 12:32:28.944441 672840 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem, removing ...
I0120 12:32:28.944453 672840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem
I0120 12:32:28.944483 672840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem (1123 bytes)
I0120 12:32:28.944551 672840 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem, removing ...
I0120 12:32:28.944564 672840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem
I0120 12:32:28.944594 672840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem (1675 bytes)
I0120 12:32:28.944860 672840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem org=jenkins.embed-certs-180778 san=[127.0.0.1 192.168.76.2 embed-certs-180778 localhost minikube]
I0120 12:32:29.205235 672840 provision.go:177] copyRemoteCerts
I0120 12:32:29.205308 672840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 12:32:29.205380 672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
I0120 12:32:29.223941 672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
I0120 12:32:29.315345 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0120 12:32:29.344056 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0120 12:32:29.369582 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0120 12:32:29.395347 672840 provision.go:87] duration metric: took 469.498258ms to configureAuth
I0120 12:32:29.395375 672840 ubuntu.go:193] setting minikube options for container-runtime
I0120 12:32:29.395570 672840 config.go:182] Loaded profile config "embed-certs-180778": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:32:29.395577 672840 machine.go:96] duration metric: took 3.930805644s to provisionDockerMachine
I0120 12:32:29.395583 672840 client.go:171] duration metric: took 10.918962629s to LocalClient.Create
I0120 12:32:29.395597 672840 start.go:167] duration metric: took 10.919028614s to libmachine.API.Create "embed-certs-180778"
I0120 12:32:29.395604 672840 start.go:293] postStartSetup for "embed-certs-180778" (driver="docker")
I0120 12:32:29.395613 672840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 12:32:29.395663 672840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 12:32:29.395708 672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
I0120 12:32:29.412875 672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
I0120 12:32:29.507688 672840 ssh_runner.go:195] Run: cat /etc/os-release
I0120 12:32:29.511275 672840 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0120 12:32:29.511317 672840 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0120 12:32:29.511329 672840 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0120 12:32:29.511337 672840 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0120 12:32:29.511348 672840 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-446459/.minikube/addons for local assets ...
I0120 12:32:29.511414 672840 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-446459/.minikube/files for local assets ...
I0120 12:32:29.511508 672840 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem -> 4518352.pem in /etc/ssl/certs
I0120 12:32:29.511623 672840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 12:32:29.521003 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem --> /etc/ssl/certs/4518352.pem (1708 bytes)
I0120 12:32:29.548519 672840 start.go:296] duration metric: took 152.900885ms for postStartSetup
I0120 12:32:29.548899 672840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-180778
I0120 12:32:29.568351 672840 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/config.json ...
I0120 12:32:29.568653 672840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0120 12:32:29.568736 672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
I0120 12:32:29.594166 672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
I0120 12:32:29.687019 672840 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0120 12:32:29.691619 672840 start.go:128] duration metric: took 11.218670352s to createHost
I0120 12:32:29.691643 672840 start.go:83] releasing machines lock for "embed-certs-180778", held for 11.218804737s
I0120 12:32:29.691715 672840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-180778
I0120 12:32:29.709489 672840 ssh_runner.go:195] Run: cat /version.json
I0120 12:32:29.709543 672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
I0120 12:32:29.709853 672840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 12:32:29.709910 672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
I0120 12:32:29.729911 672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
I0120 12:32:29.730384 672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
I0120 12:32:29.970590 672840 ssh_runner.go:195] Run: systemctl --version
I0120 12:32:29.975076 672840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0120 12:32:29.980259 672840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0120 12:32:30.007527 672840 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0120 12:32:30.007611 672840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 12:32:30.096526 672840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0120 12:32:30.096554 672840 start.go:495] detecting cgroup driver to use...
I0120 12:32:30.096587 672840 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0120 12:32:30.096663 672840 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 12:32:30.118294 672840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 12:32:30.133878 672840 docker.go:217] disabling cri-docker service (if available) ...
I0120 12:32:30.134004 672840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 12:32:30.151267 672840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 12:32:30.176215 672840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 12:32:30.283238 672840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 12:32:30.392964 672840 docker.go:233] disabling docker service ...
I0120 12:32:30.393089 672840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 12:32:30.416819 672840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 12:32:30.429232 672840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 12:32:30.525173 672840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 12:32:30.631168 672840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 12:32:30.643963 672840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 12:32:30.661241 672840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0120 12:32:30.678474 672840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 12:32:30.689781 672840 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 12:32:30.689900 672840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 12:32:30.701249 672840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 12:32:30.712146 672840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 12:32:30.723901 672840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 12:32:30.737958 672840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 12:32:30.748263 672840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 12:32:30.759698 672840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0120 12:32:30.771547 672840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0120 12:32:30.781827 672840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 12:32:30.791499 672840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 12:32:30.800701 672840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:32:30.883129 672840 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 12:32:31.019385 672840 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0120 12:32:31.019484 672840 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 12:32:31.023822 672840 start.go:563] Will wait 60s for crictl version
I0120 12:32:31.023922 672840 ssh_runner.go:195] Run: which crictl
I0120 12:32:31.027757 672840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 12:32:31.065859 672840 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.24
RuntimeApiVersion: v1
I0120 12:32:31.066031 672840 ssh_runner.go:195] Run: containerd --version
I0120 12:32:31.096215 672840 ssh_runner.go:195] Run: containerd --version
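The sequence above rewrites /etc/containerd/config.toml in place with sed (sandbox image, cgroup driver, CNI conf dir), writes /etc/crictl.yaml, reloads systemd, restarts containerd, then waits for the socket and checks crictl and containerd versions. A rough local sketch of the same patch-and-restart step in Go, assuming a containerd config at the standard path (illustrative only, not minikube's own code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell step and returns its combined output,
// loosely mirroring how each step in the log above is driven over SSH.
func run(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Hypothetical local equivalents of the steps logged above.
	steps := []string{
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart containerd`,
		`stat /run/containerd/containerd.sock`,
	}
	for _, s := range steps {
		if out, err := run(s); err != nil {
			fmt.Printf("step failed: %s\n%s\n", s, out)
			return
		}
	}
	fmt.Println("containerd reconfigured and restarted")
}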
I0120 12:32:31.125478 672840 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.24 ...
I0120 12:32:28.676084 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:31.173025 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:31.128610 672840 cli_runner.go:164] Run: docker network inspect embed-certs-180778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 12:32:31.145492 672840 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0120 12:32:31.149941 672840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
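The grep/echo/cp pipeline above makes the host.minikube.internal mapping in /etc/hosts idempotent: any stale line is filtered out before the fresh 192.168.76.1 entry is appended. A small sketch of the same rewrite done directly in Go, assuming the caller can write the file (hypothetical helper, not part of minikube):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry rewrites hostsPath so exactly one line maps
// host.minikube.internal, the same effect as the pipeline in the log above.
func ensureHostEntry(hostsPath, ip string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") || strings.HasSuffix(line, " host.minikube.internal") {
			continue // drop any stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\thost.minikube.internal", ip))
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.76.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}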
I0120 12:32:31.161236 672840 kubeadm.go:883] updating cluster {Name:embed-certs-180778 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-180778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 12:32:31.161363 672840 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 12:32:31.161429 672840 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:32:31.209673 672840 containerd.go:627] all images are preloaded for containerd runtime.
I0120 12:32:31.209700 672840 containerd.go:534] Images already preloaded, skipping extraction
I0120 12:32:31.209767 672840 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:32:31.248848 672840 containerd.go:627] all images are preloaded for containerd runtime.
I0120 12:32:31.248872 672840 cache_images.go:84] Images are preloaded, skipping loading
I0120 12:32:31.248881 672840 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.0 containerd true true} ...
I0120 12:32:31.248974 672840 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-180778 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.32.0 ClusterName:embed-certs-180778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0120 12:32:31.249044 672840 ssh_runner.go:195] Run: sudo crictl info
I0120 12:32:31.288426 672840 cni.go:84] Creating CNI manager for ""
I0120 12:32:31.288452 672840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 12:32:31.288464 672840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 12:32:31.288488 672840 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-180778 NodeName:embed-certs-180778 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0120 12:32:31.288608 672840 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "embed-certs-180778"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.76.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
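The kubeadm.yaml above is generated from the options logged at kubeadm.go:189 and, per the scp line below, staged as /var/tmp/minikube/kubeadm.yaml.new before kubeadm init consumes it. As a toy illustration of how such per-cluster values can be rendered with Go's text/template (a trimmed, hypothetical template, not the one minikube actually ships):

package main

import (
	"os"
	"text/template"
)

// A hypothetical, cut-down InitConfiguration stanza; the real generated
// config carries many more fields, as shown in the log above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	params := struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
		CRISocket     string
	}{"192.168.76.2", 8443, "embed-certs-180778", "/run/containerd/containerd.sock"}
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}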
I0120 12:32:31.288683 672840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
I0120 12:32:31.298015 672840 binaries.go:44] Found k8s binaries, skipping transfer
I0120 12:32:31.298087 672840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 12:32:31.306816 672840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0120 12:32:31.324959 672840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 12:32:31.343351 672840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0120 12:32:31.361358 672840 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0120 12:32:31.364914 672840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 12:32:31.376249 672840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:32:31.480080 672840 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 12:32:31.496199 672840 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778 for IP: 192.168.76.2
I0120 12:32:31.496264 672840 certs.go:194] generating shared ca certs ...
I0120 12:32:31.496296 672840 certs.go:226] acquiring lock for ca certs: {Name:mkcccec907119c13813a959b3b756156d7101c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:32:31.496481 672840 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-446459/.minikube/ca.key
I0120 12:32:31.496532 672840 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.key
I0120 12:32:31.496544 672840 certs.go:256] generating profile certs ...
I0120 12:32:31.496602 672840 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.key
I0120 12:32:31.496627 672840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.crt with IP's: []
I0120 12:32:31.861389 672840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.crt ...
I0120 12:32:31.861422 672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.crt: {Name:mk66dcfeb372e631d7af648df9273c43dd55d4cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:32:31.861661 672840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.key ...
I0120 12:32:31.861677 672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.key: {Name:mkd48616a77e5a2dfc13cfc3ddf4fd58bd4a6424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:32:31.861776 672840 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key.4fcf774e
I0120 12:32:31.861795 672840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt.4fcf774e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I0120 12:32:32.923712 672840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt.4fcf774e ...
I0120 12:32:32.923820 672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt.4fcf774e: {Name:mk0b55627c76bb7f573a3e475c691c515fb20aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:32:32.924064 672840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key.4fcf774e ...
I0120 12:32:32.924103 672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key.4fcf774e: {Name:mk54f28b4d8e13dea001f14acd44dce2ad52e1d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:32:32.924277 672840 certs.go:381] copying /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt.4fcf774e -> /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt
I0120 12:32:32.924413 672840 certs.go:385] copying /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key.4fcf774e -> /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key
I0120 12:32:32.924531 672840 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.key
I0120 12:32:32.924574 672840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.crt with IP's: []
I0120 12:32:33.463775 672840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.crt ...
I0120 12:32:33.463815 672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.crt: {Name:mk877d8de6f8929e80c2eea656c7efdb436d8404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:32:33.464724 672840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.key ...
I0120 12:32:33.464744 672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.key: {Name:mk08b2b89218b1d60bc83a4123c18929c147093c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
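The client, apiserver and proxy-client certificates above are minted on the host against the cached minikubeCA and only afterwards copied into /var/lib/minikube/certs. A condensed sketch of issuing a CA-signed serving certificate with the same IP SANs using only the Go standard library (illustrative, not the helper minikube uses):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, a stand-in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs the apiserver cert above is signed for.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}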
I0120 12:32:33.464965 672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835.pem (1338 bytes)
W0120 12:32:33.465016 672840 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835_empty.pem, impossibly tiny 0 bytes
I0120 12:32:33.465029 672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem (1675 bytes)
I0120 12:32:33.465066 672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem (1082 bytes)
I0120 12:32:33.465100 672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem (1123 bytes)
I0120 12:32:33.465132 672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem (1675 bytes)
I0120 12:32:33.465183 672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem (1708 bytes)
I0120 12:32:33.465864 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 12:32:33.492047 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0120 12:32:33.518007 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 12:32:33.545473 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0120 12:32:33.572297 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0120 12:32:33.601752 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0120 12:32:33.627864 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 12:32:33.653053 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0120 12:32:33.680409 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 12:32:33.706251 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835.pem --> /usr/share/ca-certificates/451835.pem (1338 bytes)
I0120 12:32:33.731192 672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem --> /usr/share/ca-certificates/4518352.pem (1708 bytes)
I0120 12:32:33.756251 672840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 12:32:33.774353 672840 ssh_runner.go:195] Run: openssl version
I0120 12:32:33.781401 672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4518352.pem && ln -fs /usr/share/ca-certificates/4518352.pem /etc/ssl/certs/4518352.pem"
I0120 12:32:33.791457 672840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4518352.pem
I0120 12:32:33.795020 672840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:45 /usr/share/ca-certificates/4518352.pem
I0120 12:32:33.795081 672840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4518352.pem
I0120 12:32:33.802142 672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4518352.pem /etc/ssl/certs/3ec20f2e.0"
I0120 12:32:33.812609 672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 12:32:33.822137 672840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 12:32:33.827763 672840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:38 /usr/share/ca-certificates/minikubeCA.pem
I0120 12:32:33.827835 672840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 12:32:33.835901 672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 12:32:33.847755 672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/451835.pem && ln -fs /usr/share/ca-certificates/451835.pem /etc/ssl/certs/451835.pem"
I0120 12:32:33.858687 672840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/451835.pem
I0120 12:32:33.862899 672840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:45 /usr/share/ca-certificates/451835.pem
I0120 12:32:33.862968 672840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/451835.pem
I0120 12:32:33.871401 672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/451835.pem /etc/ssl/certs/51391683.0"
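The openssl/ln commands above install each CA under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA), which is how OpenSSL-based clients on the node locate trust anchors. A small sketch of the same pattern, shelling out to openssl for the hash (hypothetical helper, assumes openssl is on PATH and the caller may write /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the ln -fs commands in the log above.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}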
I0120 12:32:33.882229 672840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 12:32:33.888186 672840 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0120 12:32:33.888283 672840 kubeadm.go:392] StartCluster: {Name:embed-certs-180778 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-180778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 12:32:33.888376 672840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0120 12:32:33.888434 672840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 12:32:33.929525 672840 cri.go:89] found id: ""
I0120 12:32:33.929657 672840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 12:32:33.939061 672840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 12:32:33.948035 672840 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0120 12:32:33.948156 672840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 12:32:33.957387 672840 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 12:32:33.957409 672840 kubeadm.go:157] found existing configuration files:
I0120 12:32:33.957480 672840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0120 12:32:33.966497 672840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0120 12:32:33.966562 672840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0120 12:32:33.975532 672840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0120 12:32:33.984647 672840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0120 12:32:33.984756 672840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0120 12:32:33.993712 672840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0120 12:32:34.002924 672840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0120 12:32:34.002994 672840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0120 12:32:34.015278 672840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0120 12:32:34.025440 672840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0120 12:32:34.025545 672840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0120 12:32:34.035634 672840 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0120 12:32:34.082588 672840 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
I0120 12:32:34.082769 672840 kubeadm.go:310] [preflight] Running pre-flight checks
I0120 12:32:34.110149 672840 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0120 12:32:34.110309 672840 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
I0120 12:32:34.110388 672840 kubeadm.go:310] OS: Linux
I0120 12:32:34.110464 672840 kubeadm.go:310] CGROUPS_CPU: enabled
I0120 12:32:34.110545 672840 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0120 12:32:34.110628 672840 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0120 12:32:34.110705 672840 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0120 12:32:34.110781 672840 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0120 12:32:34.110862 672840 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0120 12:32:34.110938 672840 kubeadm.go:310] CGROUPS_PIDS: enabled
I0120 12:32:34.111018 672840 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0120 12:32:34.111093 672840 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0120 12:32:34.181795 672840 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0120 12:32:34.181967 672840 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0120 12:32:34.182097 672840 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0120 12:32:34.188631 672840 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0120 12:32:33.175082 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:35.672357 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:34.192837 672840 out.go:235] - Generating certificates and keys ...
I0120 12:32:34.192949 672840 kubeadm.go:310] [certs] Using existing ca certificate authority
I0120 12:32:34.193023 672840 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0120 12:32:34.459338 672840 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0120 12:32:35.343439 672840 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0120 12:32:35.789919 672840 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0120 12:32:36.264291 672840 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0120 12:32:37.427984 672840 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0120 12:32:37.428391 672840 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-180778 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I0120 12:32:38.116148 672840 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0120 12:32:38.116473 672840 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-180778 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I0120 12:32:38.172501 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:40.172959 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:42.174961 663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
I0120 12:32:39.001275 672840 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0120 12:32:40.122022 672840 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0120 12:32:40.477529 672840 kubeadm.go:310] [certs] Generating "sa" key and public key
I0120 12:32:40.477849 672840 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0120 12:32:40.949137 672840 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0120 12:32:41.285314 672840 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0120 12:32:41.578848 672840 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0120 12:32:42.240152 672840 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0120 12:32:43.021708 672840 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0120 12:32:43.022886 672840 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0120 12:32:43.026134 672840 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0120 12:32:43.029359 672840 out.go:235] - Booting up control plane ...
I0120 12:32:43.029462 672840 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0120 12:32:43.029539 672840 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0120 12:32:43.030911 672840 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0120 12:32:43.047099 672840 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0120 12:32:43.054424 672840 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0120 12:32:43.054496 672840 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0120 12:32:43.187520 672840 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0120 12:32:43.187647 672840 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0120 12:32:42.665891 663170 pod_ready.go:82] duration metric: took 4m0.000999177s for pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace to be "Ready" ...
E0120 12:32:42.665923 663170 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0120 12:32:42.665934 663170 pod_ready.go:39] duration metric: took 5m25.307823459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:32:42.665953 663170 api_server.go:52] waiting for apiserver process to appear ...
I0120 12:32:42.665985 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 12:32:42.666060 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 12:32:42.761425 663170 cri.go:89] found id: "5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
I0120 12:32:42.761457 663170 cri.go:89] found id: "6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
I0120 12:32:42.761464 663170 cri.go:89] found id: ""
I0120 12:32:42.761472 663170 logs.go:282] 2 containers: [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15]
I0120 12:32:42.761530 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.766334 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.770402 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 12:32:42.770477 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 12:32:42.840870 663170 cri.go:89] found id: "d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
I0120 12:32:42.840890 663170 cri.go:89] found id: "4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
I0120 12:32:42.840895 663170 cri.go:89] found id: ""
I0120 12:32:42.840902 663170 logs.go:282] 2 containers: [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf]
I0120 12:32:42.840959 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.846031 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.850194 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 12:32:42.850260 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 12:32:42.904928 663170 cri.go:89] found id: "b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
I0120 12:32:42.904957 663170 cri.go:89] found id: "31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
I0120 12:32:42.904963 663170 cri.go:89] found id: ""
I0120 12:32:42.904970 663170 logs.go:282] 2 containers: [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075]
I0120 12:32:42.905025 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.909172 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.912704 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 12:32:42.912772 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 12:32:42.968944 663170 cri.go:89] found id: "d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
I0120 12:32:42.969015 663170 cri.go:89] found id: "758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
I0120 12:32:42.969035 663170 cri.go:89] found id: ""
I0120 12:32:42.969061 663170 logs.go:282] 2 containers: [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0]
I0120 12:32:42.969168 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.973579 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:42.978112 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 12:32:42.978252 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 12:32:43.050120 663170 cri.go:89] found id: "3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
I0120 12:32:43.050196 663170 cri.go:89] found id: "a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
I0120 12:32:43.050216 663170 cri.go:89] found id: ""
I0120 12:32:43.050241 663170 logs.go:282] 2 containers: [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03]
I0120 12:32:43.050338 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.054664 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.058589 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 12:32:43.058720 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 12:32:43.117777 663170 cri.go:89] found id: "beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
I0120 12:32:43.117802 663170 cri.go:89] found id: "8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
I0120 12:32:43.117807 663170 cri.go:89] found id: ""
I0120 12:32:43.117814 663170 logs.go:282] 2 containers: [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2]
I0120 12:32:43.117901 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.126390 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.136897 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 12:32:43.137072 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 12:32:43.200437 663170 cri.go:89] found id: "a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
I0120 12:32:43.200515 663170 cri.go:89] found id: "2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
I0120 12:32:43.200538 663170 cri.go:89] found id: ""
I0120 12:32:43.200565 663170 logs.go:282] 2 containers: [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10]
I0120 12:32:43.200662 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.204950 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.208929 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 12:32:43.209037 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 12:32:43.259134 663170 cri.go:89] found id: "d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
I0120 12:32:43.259192 663170 cri.go:89] found id: ""
I0120 12:32:43.259224 663170 logs.go:282] 1 containers: [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6]
I0120 12:32:43.259308 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.263374 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 12:32:43.263497 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 12:32:43.311336 663170 cri.go:89] found id: "2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
I0120 12:32:43.311398 663170 cri.go:89] found id: "fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
I0120 12:32:43.311427 663170 cri.go:89] found id: ""
I0120 12:32:43.311452 663170 logs.go:282] 2 containers: [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224]
I0120 12:32:43.311549 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.315630 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:43.319342 663170 logs.go:123] Gathering logs for kubernetes-dashboard [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6] ...
I0120 12:32:43.319422 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
I0120 12:32:43.372921 663170 logs.go:123] Gathering logs for storage-provisioner [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d] ...
I0120 12:32:43.373003 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
I0120 12:32:43.427917 663170 logs.go:123] Gathering logs for containerd ...
I0120 12:32:43.427995 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 12:32:43.498070 663170 logs.go:123] Gathering logs for container status ...
I0120 12:32:43.498147 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 12:32:43.571418 663170 logs.go:123] Gathering logs for kubelet ...
I0120 12:32:43.571498 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0120 12:32:43.636420 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.205553 655 reflector.go:138] object-"kube-system"/"coredns-token-brbgd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-brbgd" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.636782 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.218010 655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.637034 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.305944 655 reflector.go:138] object-"kube-system"/"metrics-server-token-t7n5d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t7n5d" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.637271 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306033 655 reflector.go:138] object-"kube-system"/"kindnet-token-htldq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-htldq" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.637517 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306082 655 reflector.go:138] object-"kube-system"/"kube-proxy-token-85wbm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-85wbm" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.637864 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306131 655 reflector.go:138] object-"default"/"default-token-pngw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pngw5" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.638104 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306180 655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.638355 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306224 655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-fgdsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-fgdsf" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:43.646625 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.721324 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:43.646855 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.754709 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.650337 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:35 old-k8s-version-618033 kubelet[655]: E0120 12:27:35.610171 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:43.652382 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.614294 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.652979 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.976430 655 pod_workers.go:191] Error syncing pod 7614f8ae-aae6-4203-96ff-40a900278cf6 ("storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"
W0120 12:32:43.653464 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:50 old-k8s-version-618033 kubelet[655]: E0120 12:27:50.989213 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.653826 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:51 old-k8s-version-618033 kubelet[655]: E0120 12:27:51.992870 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.654514 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:58 old-k8s-version-618033 kubelet[655]: E0120 12:27:58.673483 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.656989 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:00 old-k8s-version-618033 kubelet[655]: E0120 12:28:00.612213 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:43.657747 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:12 old-k8s-version-618033 kubelet[655]: E0120 12:28:12.084652 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.657954 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:13 old-k8s-version-618033 kubelet[655]: E0120 12:28:13.597787 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.658300 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:18 old-k8s-version-618033 kubelet[655]: E0120 12:28:18.673267 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.658511 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:26 old-k8s-version-618033 kubelet[655]: E0120 12:28:26.596529 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.658872 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:30 old-k8s-version-618033 kubelet[655]: E0120 12:28:30.596223 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.659081 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:40 old-k8s-version-618033 kubelet[655]: E0120 12:28:40.596632 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.659690 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:43 old-k8s-version-618033 kubelet[655]: E0120 12:28:43.165668 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.660040 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:48 old-k8s-version-618033 kubelet[655]: E0120 12:28:48.673706 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.662593 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:51 old-k8s-version-618033 kubelet[655]: E0120 12:28:51.610383 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:43.663687 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:03 old-k8s-version-618033 kubelet[655]: E0120 12:29:03.602577 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.664062 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:04 old-k8s-version-618033 kubelet[655]: E0120 12:29:04.596213 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.664277 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:17 old-k8s-version-618033 kubelet[655]: E0120 12:29:17.597227 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.664622 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:18 old-k8s-version-618033 kubelet[655]: E0120 12:29:18.596696 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.664827 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:30 old-k8s-version-618033 kubelet[655]: E0120 12:29:30.596660 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.665441 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:33 old-k8s-version-618033 kubelet[655]: E0120 12:29:33.299251 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.665804 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:38 old-k8s-version-618033 kubelet[655]: E0120 12:29:38.673765 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.666010 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:42 old-k8s-version-618033 kubelet[655]: E0120 12:29:42.596621 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.666361 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:49 old-k8s-version-618033 kubelet[655]: E0120 12:29:49.596280 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.666576 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:57 old-k8s-version-618033 kubelet[655]: E0120 12:29:57.598023 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.666934 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:03 old-k8s-version-618033 kubelet[655]: E0120 12:30:03.596329 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.667150 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:10 old-k8s-version-618033 kubelet[655]: E0120 12:30:10.596520 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.667504 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:14 old-k8s-version-618033 kubelet[655]: E0120 12:30:14.596119 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.670002 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:22 old-k8s-version-618033 kubelet[655]: E0120 12:30:22.605228 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:43.670427 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:28 old-k8s-version-618033 kubelet[655]: E0120 12:30:28.596161 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.670640 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:36 old-k8s-version-618033 kubelet[655]: E0120 12:30:36.596791 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.670990 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:41 old-k8s-version-618033 kubelet[655]: E0120 12:30:41.596271 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.671204 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:49 old-k8s-version-618033 kubelet[655]: E0120 12:30:49.600904 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.671841 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:54 old-k8s-version-618033 kubelet[655]: E0120 12:30:54.524938 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.672233 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:58 old-k8s-version-618033 kubelet[655]: E0120 12:30:58.673184 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.672442 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:00 old-k8s-version-618033 kubelet[655]: E0120 12:31:00.596643 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.672687 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:12 old-k8s-version-618033 kubelet[655]: E0120 12:31:12.596594 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.673044 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:13 old-k8s-version-618033 kubelet[655]: E0120 12:31:13.596456 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.673252 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:24 old-k8s-version-618033 kubelet[655]: E0120 12:31:24.596574 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.673646 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:25 old-k8s-version-618033 kubelet[655]: E0120 12:31:25.596263 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.673853 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:38 old-k8s-version-618033 kubelet[655]: E0120 12:31:38.596565 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.674203 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: E0120 12:31:40.596161 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.674413 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:49 old-k8s-version-618033 kubelet[655]: E0120 12:31:49.596536 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.674775 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: E0120 12:31:53.597121 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.674982 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:03 old-k8s-version-618033 kubelet[655]: E0120 12:32:03.599363 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.675330 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: E0120 12:32:08.596270 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.675536 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.675891 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.676100 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:43.676456 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:43.676672 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 12:32:43.676711 663170 logs.go:123] Gathering logs for kube-apiserver [6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15] ...
I0120 12:32:43.676747 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
I0120 12:32:43.743941 663170 logs.go:123] Gathering logs for kube-scheduler [758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0] ...
I0120 12:32:43.743981 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
I0120 12:32:43.806114 663170 logs.go:123] Gathering logs for kube-proxy [a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03] ...
I0120 12:32:43.806150 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
I0120 12:32:43.857911 663170 logs.go:123] Gathering logs for kindnet [2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10] ...
I0120 12:32:43.857941 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
I0120 12:32:43.917003 663170 logs.go:123] Gathering logs for kube-apiserver [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a] ...
I0120 12:32:43.917032 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
I0120 12:32:43.992709 663170 logs.go:123] Gathering logs for etcd [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7] ...
I0120 12:32:43.992759 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
I0120 12:32:44.068689 663170 logs.go:123] Gathering logs for coredns [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5] ...
I0120 12:32:44.068723 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
I0120 12:32:44.123499 663170 logs.go:123] Gathering logs for kube-proxy [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352] ...
I0120 12:32:44.123529 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
I0120 12:32:44.181810 663170 logs.go:123] Gathering logs for dmesg ...
I0120 12:32:44.181838 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 12:32:44.204612 663170 logs.go:123] Gathering logs for etcd [4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf] ...
I0120 12:32:44.204641 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
I0120 12:32:44.262671 663170 logs.go:123] Gathering logs for kindnet [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3] ...
I0120 12:32:44.262704 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
I0120 12:32:44.313537 663170 logs.go:123] Gathering logs for storage-provisioner [fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224] ...
I0120 12:32:44.313569 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
I0120 12:32:44.385646 663170 logs.go:123] Gathering logs for kube-controller-manager [8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2] ...
I0120 12:32:44.385744 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
I0120 12:32:44.474032 663170 logs.go:123] Gathering logs for describe nodes ...
I0120 12:32:44.474111 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 12:32:44.677528 663170 logs.go:123] Gathering logs for coredns [31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075] ...
I0120 12:32:44.677562 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
I0120 12:32:44.721616 663170 logs.go:123] Gathering logs for kube-scheduler [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e] ...
I0120 12:32:44.721690 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
I0120 12:32:44.768059 663170 logs.go:123] Gathering logs for kube-controller-manager [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0] ...
I0120 12:32:44.768141 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
I0120 12:32:44.829786 663170 out.go:358] Setting ErrFile to fd 2...
I0120 12:32:44.829821 663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 12:32:44.829881 663170 out.go:270] X Problems detected in kubelet:
W0120 12:32:44.829892 663170 out.go:270] Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:44.829917 663170 out.go:270] Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:44.829953 663170 out.go:270] Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:44.829961 663170 out.go:270] Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:44.829967 663170 out.go:270] Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 12:32:44.829972 663170 out.go:358] Setting ErrFile to fd 2...
I0120 12:32:44.829979 663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:32:44.687318 672840 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500952428s
I0120 12:32:44.687407 672840 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0120 12:32:51.189382 672840 kubeadm.go:310] [api-check] The API server is healthy after 6.502060656s
I0120 12:32:51.215697 672840 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0120 12:32:51.235012 672840 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0120 12:32:51.263519 672840 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0120 12:32:51.263728 672840 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-180778 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0120 12:32:51.276027 672840 kubeadm.go:310] [bootstrap-token] Using token: vydbii.6x4lt3eagn7amsg9
I0120 12:32:51.278964 672840 out.go:235] - Configuring RBAC rules ...
I0120 12:32:51.279099 672840 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0120 12:32:51.286276 672840 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0120 12:32:51.297527 672840 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0120 12:32:51.301940 672840 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0120 12:32:51.306375 672840 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0120 12:32:51.311523 672840 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0120 12:32:51.606710 672840 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0120 12:32:52.046119 672840 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0120 12:32:52.598023 672840 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0120 12:32:52.599239 672840 kubeadm.go:310]
I0120 12:32:52.599327 672840 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0120 12:32:52.599343 672840 kubeadm.go:310]
I0120 12:32:52.599422 672840 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0120 12:32:52.599432 672840 kubeadm.go:310]
I0120 12:32:52.599458 672840 kubeadm.go:310] mkdir -p $HOME/.kube
I0120 12:32:52.599521 672840 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0120 12:32:52.599577 672840 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0120 12:32:52.599585 672840 kubeadm.go:310]
I0120 12:32:52.599638 672840 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0120 12:32:52.599646 672840 kubeadm.go:310]
I0120 12:32:52.599694 672840 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0120 12:32:52.599702 672840 kubeadm.go:310]
I0120 12:32:52.599754 672840 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0120 12:32:52.599833 672840 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0120 12:32:52.599904 672840 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0120 12:32:52.599916 672840 kubeadm.go:310]
I0120 12:32:52.600000 672840 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0120 12:32:52.600080 672840 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0120 12:32:52.600088 672840 kubeadm.go:310]
I0120 12:32:52.600176 672840 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vydbii.6x4lt3eagn7amsg9 \
I0120 12:32:52.600284 672840 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:cf58d6b4df152431c4946a83dccf7fb472b0285b6e4dd4c00154a1eb2bb479b5 \
I0120 12:32:52.600308 672840 kubeadm.go:310] --control-plane
I0120 12:32:52.600316 672840 kubeadm.go:310]
I0120 12:32:52.600401 672840 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0120 12:32:52.600410 672840 kubeadm.go:310]
I0120 12:32:52.600492 672840 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vydbii.6x4lt3eagn7amsg9 \
I0120 12:32:52.600600 672840 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:cf58d6b4df152431c4946a83dccf7fb472b0285b6e4dd4c00154a1eb2bb479b5
I0120 12:32:52.605496 672840 kubeadm.go:310] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I0120 12:32:52.605767 672840 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
I0120 12:32:52.605943 672840 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0120 12:32:52.605981 672840 cni.go:84] Creating CNI manager for ""
I0120 12:32:52.605991 672840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0120 12:32:52.609312 672840 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0120 12:32:52.612206 672840 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0120 12:32:52.616083 672840 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
I0120 12:32:52.616104 672840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I0120 12:32:52.636124 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0120 12:32:52.946561 672840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0120 12:32:52.946707 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:32:52.946789 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-180778 minikube.k8s.io/updated_at=2025_01_20T12_32_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=embed-certs-180778 minikube.k8s.io/primary=true
I0120 12:32:53.119011 672840 ops.go:34] apiserver oom_adj: -16
I0120 12:32:53.119124 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:32:53.619448 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:32:54.119835 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:32:54.619725 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:32:55.119402 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:32:55.619208 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:32:56.119334 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:32:56.620058 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:32:57.120025 672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:32:57.339155 672840 kubeadm.go:1113] duration metric: took 4.39249512s to wait for elevateKubeSystemPrivileges
I0120 12:32:57.339183 672840 kubeadm.go:394] duration metric: took 23.450905389s to StartCluster
I0120 12:32:57.339205 672840 settings.go:142] acquiring lock: {Name:mka92edde1befc8914a01871e41167ef1a7b90c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:32:57.339266 672840 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20151-446459/kubeconfig
I0120 12:32:57.340658 672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/kubeconfig: {Name:mkd202431392e920a92afeece62697072b25ee29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:32:57.340876 672840 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 12:32:57.340959 672840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0120 12:32:57.341194 672840 config.go:182] Loaded profile config "embed-certs-180778": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:32:57.341227 672840 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 12:32:57.341284 672840 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-180778"
I0120 12:32:57.341298 672840 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-180778"
I0120 12:32:57.341319 672840 host.go:66] Checking if "embed-certs-180778" exists ...
I0120 12:32:57.341978 672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
I0120 12:32:57.342482 672840 addons.go:69] Setting default-storageclass=true in profile "embed-certs-180778"
I0120 12:32:57.342502 672840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-180778"
I0120 12:32:57.342796 672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
I0120 12:32:57.346015 672840 out.go:177] * Verifying Kubernetes components...
I0120 12:32:57.354182 672840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:32:57.390410 672840 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 12:32:54.831056 663170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 12:32:54.842999 663170 api_server.go:72] duration metric: took 5m55.169056051s to wait for apiserver process to appear ...
I0120 12:32:54.843025 663170 api_server.go:88] waiting for apiserver healthz status ...
I0120 12:32:54.843060 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 12:32:54.843120 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 12:32:54.892331 663170 cri.go:89] found id: "5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
I0120 12:32:54.892355 663170 cri.go:89] found id: "6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
I0120 12:32:54.892360 663170 cri.go:89] found id: ""
I0120 12:32:54.892367 663170 logs.go:282] 2 containers: [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15]
I0120 12:32:54.892424 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:54.896167 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:54.899483 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 12:32:54.899551 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 12:32:54.947556 663170 cri.go:89] found id: "d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
I0120 12:32:54.947585 663170 cri.go:89] found id: "4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
I0120 12:32:54.947591 663170 cri.go:89] found id: ""
I0120 12:32:54.947598 663170 logs.go:282] 2 containers: [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf]
I0120 12:32:54.947656 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:54.951481 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:54.955038 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 12:32:54.955113 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 12:32:54.999061 663170 cri.go:89] found id: "b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
I0120 12:32:54.999094 663170 cri.go:89] found id: "31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
I0120 12:32:54.999099 663170 cri.go:89] found id: ""
I0120 12:32:54.999106 663170 logs.go:282] 2 containers: [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075]
I0120 12:32:54.999164 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.003398 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.006791 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 12:32:55.006865 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 12:32:55.053724 663170 cri.go:89] found id: "d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
I0120 12:32:55.053750 663170 cri.go:89] found id: "758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
I0120 12:32:55.053755 663170 cri.go:89] found id: ""
I0120 12:32:55.053763 663170 logs.go:282] 2 containers: [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0]
I0120 12:32:55.053826 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.057957 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.061739 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 12:32:55.061865 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 12:32:55.112602 663170 cri.go:89] found id: "3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
I0120 12:32:55.112625 663170 cri.go:89] found id: "a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
I0120 12:32:55.112631 663170 cri.go:89] found id: ""
I0120 12:32:55.112638 663170 logs.go:282] 2 containers: [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03]
I0120 12:32:55.112718 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.116611 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.121704 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 12:32:55.121779 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 12:32:55.181387 663170 cri.go:89] found id: "beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
I0120 12:32:55.181409 663170 cri.go:89] found id: "8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
I0120 12:32:55.181414 663170 cri.go:89] found id: ""
I0120 12:32:55.181421 663170 logs.go:282] 2 containers: [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2]
I0120 12:32:55.181497 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.186863 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.191042 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 12:32:55.191113 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 12:32:55.244409 663170 cri.go:89] found id: "a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
I0120 12:32:55.244442 663170 cri.go:89] found id: "2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
I0120 12:32:55.244449 663170 cri.go:89] found id: ""
I0120 12:32:55.244456 663170 logs.go:282] 2 containers: [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10]
I0120 12:32:55.244522 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.253198 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.260336 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 12:32:55.260427 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 12:32:55.307825 663170 cri.go:89] found id: "2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
I0120 12:32:55.307847 663170 cri.go:89] found id: "fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
I0120 12:32:55.307851 663170 cri.go:89] found id: ""
I0120 12:32:55.307858 663170 logs.go:282] 2 containers: [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224]
I0120 12:32:55.307925 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.311753 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.315323 663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 12:32:55.315404 663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 12:32:55.356240 663170 cri.go:89] found id: "d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
I0120 12:32:55.356269 663170 cri.go:89] found id: ""
I0120 12:32:55.356277 663170 logs.go:282] 1 containers: [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6]
I0120 12:32:55.356345 663170 ssh_runner.go:195] Run: which crictl
I0120 12:32:55.359958 663170 logs.go:123] Gathering logs for kubelet ...
I0120 12:32:55.359984 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0120 12:32:55.418304 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.205553 655 reflector.go:138] object-"kube-system"/"coredns-token-brbgd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-brbgd" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.418614 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.218010 655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.418849 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.305944 655 reflector.go:138] object-"kube-system"/"metrics-server-token-t7n5d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t7n5d" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.419071 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306033 655 reflector.go:138] object-"kube-system"/"kindnet-token-htldq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-htldq" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.419291 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306082 655 reflector.go:138] object-"kube-system"/"kube-proxy-token-85wbm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-85wbm" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.419546 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306131 655 reflector.go:138] object-"default"/"default-token-pngw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pngw5" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.419756 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306180 655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.419984 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306224 655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-fgdsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-fgdsf" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
W0120 12:32:55.428109 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.721324 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:55.428309 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.754709 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.431722 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:35 old-k8s-version-618033 kubelet[655]: E0120 12:27:35.610171 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:55.433745 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.614294 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.434318 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.976430 655 pod_workers.go:191] Error syncing pod 7614f8ae-aae6-4203-96ff-40a900278cf6 ("storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"
W0120 12:32:55.434787 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:50 old-k8s-version-618033 kubelet[655]: E0120 12:27:50.989213 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.435118 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:51 old-k8s-version-618033 kubelet[655]: E0120 12:27:51.992870 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.435792 663170 logs.go:138] Found kubelet problem: Jan 20 12:27:58 old-k8s-version-618033 kubelet[655]: E0120 12:27:58.673483 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.438245 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:00 old-k8s-version-618033 kubelet[655]: E0120 12:28:00.612213 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:55.438973 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:12 old-k8s-version-618033 kubelet[655]: E0120 12:28:12.084652 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.439157 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:13 old-k8s-version-618033 kubelet[655]: E0120 12:28:13.597787 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.439485 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:18 old-k8s-version-618033 kubelet[655]: E0120 12:28:18.673267 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.439669 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:26 old-k8s-version-618033 kubelet[655]: E0120 12:28:26.596529 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.439998 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:30 old-k8s-version-618033 kubelet[655]: E0120 12:28:30.596223 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.440181 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:40 old-k8s-version-618033 kubelet[655]: E0120 12:28:40.596632 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.440772 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:43 old-k8s-version-618033 kubelet[655]: E0120 12:28:43.165668 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.441099 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:48 old-k8s-version-618033 kubelet[655]: E0120 12:28:48.673706 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.443603 663170 logs.go:138] Found kubelet problem: Jan 20 12:28:51 old-k8s-version-618033 kubelet[655]: E0120 12:28:51.610383 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:55.443790 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:03 old-k8s-version-618033 kubelet[655]: E0120 12:29:03.602577 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.444120 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:04 old-k8s-version-618033 kubelet[655]: E0120 12:29:04.596213 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.444327 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:17 old-k8s-version-618033 kubelet[655]: E0120 12:29:17.597227 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.444659 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:18 old-k8s-version-618033 kubelet[655]: E0120 12:29:18.596696 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.444844 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:30 old-k8s-version-618033 kubelet[655]: E0120 12:29:30.596660 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.445435 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:33 old-k8s-version-618033 kubelet[655]: E0120 12:29:33.299251 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.445773 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:38 old-k8s-version-618033 kubelet[655]: E0120 12:29:38.673765 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.445961 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:42 old-k8s-version-618033 kubelet[655]: E0120 12:29:42.596621 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.446294 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:49 old-k8s-version-618033 kubelet[655]: E0120 12:29:49.596280 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.446482 663170 logs.go:138] Found kubelet problem: Jan 20 12:29:57 old-k8s-version-618033 kubelet[655]: E0120 12:29:57.598023 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.446813 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:03 old-k8s-version-618033 kubelet[655]: E0120 12:30:03.596329 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.446998 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:10 old-k8s-version-618033 kubelet[655]: E0120 12:30:10.596520 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.447326 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:14 old-k8s-version-618033 kubelet[655]: E0120 12:30:14.596119 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.449780 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:22 old-k8s-version-618033 kubelet[655]: E0120 12:30:22.605228 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0120 12:32:55.450110 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:28 old-k8s-version-618033 kubelet[655]: E0120 12:30:28.596161 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.450297 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:36 old-k8s-version-618033 kubelet[655]: E0120 12:30:36.596791 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.450632 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:41 old-k8s-version-618033 kubelet[655]: E0120 12:30:41.596271 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.450817 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:49 old-k8s-version-618033 kubelet[655]: E0120 12:30:49.600904 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.451412 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:54 old-k8s-version-618033 kubelet[655]: E0120 12:30:54.524938 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.451745 663170 logs.go:138] Found kubelet problem: Jan 20 12:30:58 old-k8s-version-618033 kubelet[655]: E0120 12:30:58.673184 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.451930 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:00 old-k8s-version-618033 kubelet[655]: E0120 12:31:00.596643 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.452114 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:12 old-k8s-version-618033 kubelet[655]: E0120 12:31:12.596594 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.452442 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:13 old-k8s-version-618033 kubelet[655]: E0120 12:31:13.596456 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.452627 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:24 old-k8s-version-618033 kubelet[655]: E0120 12:31:24.596574 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.452954 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:25 old-k8s-version-618033 kubelet[655]: E0120 12:31:25.596263 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.453138 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:38 old-k8s-version-618033 kubelet[655]: E0120 12:31:38.596565 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.453469 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: E0120 12:31:40.596161 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.453659 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:49 old-k8s-version-618033 kubelet[655]: E0120 12:31:49.596536 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.453990 663170 logs.go:138] Found kubelet problem: Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: E0120 12:31:53.597121 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.454174 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:03 old-k8s-version-618033 kubelet[655]: E0120 12:32:03.599363 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.454503 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: E0120 12:32:08.596270 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.454690 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.455019 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.455203 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.455532 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.455716 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:55.456045 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:55.456231 663170 logs.go:138] Found kubelet problem: Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 12:32:55.456240 663170 logs.go:123] Gathering logs for coredns [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5] ...
I0120 12:32:55.456257 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
I0120 12:32:55.498655 663170 logs.go:123] Gathering logs for kube-proxy [a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03] ...
I0120 12:32:55.498685 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
I0120 12:32:55.545339 663170 logs.go:123] Gathering logs for kube-controller-manager [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0] ...
I0120 12:32:55.545367 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
I0120 12:32:55.695497 663170 logs.go:123] Gathering logs for kube-controller-manager [8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2] ...
I0120 12:32:55.695578 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
I0120 12:32:55.790895 663170 logs.go:123] Gathering logs for dmesg ...
I0120 12:32:55.790932 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 12:32:55.808465 663170 logs.go:123] Gathering logs for kube-apiserver [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a] ...
I0120 12:32:55.808496 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
I0120 12:32:55.866823 663170 logs.go:123] Gathering logs for kube-apiserver [6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15] ...
I0120 12:32:55.866858 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
I0120 12:32:55.996274 663170 logs.go:123] Gathering logs for kube-scheduler [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e] ...
I0120 12:32:55.996312 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
I0120 12:32:56.059035 663170 logs.go:123] Gathering logs for storage-provisioner [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d] ...
I0120 12:32:56.059067 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
I0120 12:32:56.108806 663170 logs.go:123] Gathering logs for etcd [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7] ...
I0120 12:32:56.108854 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
I0120 12:32:56.180797 663170 logs.go:123] Gathering logs for kindnet [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3] ...
I0120 12:32:56.180898 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
I0120 12:32:56.249831 663170 logs.go:123] Gathering logs for storage-provisioner [fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224] ...
I0120 12:32:56.249864 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
I0120 12:32:56.297821 663170 logs.go:123] Gathering logs for kubernetes-dashboard [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6] ...
I0120 12:32:56.297851 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
I0120 12:32:56.353347 663170 logs.go:123] Gathering logs for container status ...
I0120 12:32:56.353381 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 12:32:56.414819 663170 logs.go:123] Gathering logs for describe nodes ...
I0120 12:32:56.414848 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 12:32:56.561358 663170 logs.go:123] Gathering logs for etcd [4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf] ...
I0120 12:32:56.561390 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
I0120 12:32:56.626001 663170 logs.go:123] Gathering logs for coredns [31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075] ...
I0120 12:32:56.626092 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
I0120 12:32:56.674576 663170 logs.go:123] Gathering logs for kube-scheduler [758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0] ...
I0120 12:32:56.674668 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
I0120 12:32:56.731078 663170 logs.go:123] Gathering logs for kube-proxy [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352] ...
I0120 12:32:56.731162 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
I0120 12:32:56.784777 663170 logs.go:123] Gathering logs for kindnet [2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10] ...
I0120 12:32:56.784856 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
I0120 12:32:56.839707 663170 logs.go:123] Gathering logs for containerd ...
I0120 12:32:56.839793 663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 12:32:56.911951 663170 out.go:358] Setting ErrFile to fd 2...
I0120 12:32:56.911990 663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0120 12:32:56.912046 663170 out.go:270] X Problems detected in kubelet:
W0120 12:32:56.912063 663170 out.go:270] Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:56.912071 663170 out.go:270] Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:56.912084 663170 out.go:270] Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0120 12:32:56.912099 663170 out.go:270] Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
W0120 12:32:56.912124 663170 out.go:270] Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0120 12:32:56.912129 663170 out.go:358] Setting ErrFile to fd 2...
I0120 12:32:56.912136 663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:32:57.392100 672840 addons.go:238] Setting addon default-storageclass=true in "embed-certs-180778"
I0120 12:32:57.392142 672840 host.go:66] Checking if "embed-certs-180778" exists ...
I0120 12:32:57.392568 672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
I0120 12:32:57.397067 672840 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:32:57.397089 672840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 12:32:57.397155 672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
I0120 12:32:57.427311 672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
I0120 12:32:57.434064 672840 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 12:32:57.434092 672840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 12:32:57.434166 672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
I0120 12:32:57.460556 672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
I0120 12:32:57.797407 672840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0120 12:32:57.797553 672840 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 12:32:57.802382 672840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:32:57.862359 672840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 12:32:58.492584 672840 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
I0120 12:32:58.495181 672840 node_ready.go:35] waiting up to 6m0s for node "embed-certs-180778" to be "Ready" ...
I0120 12:32:58.518417 672840 node_ready.go:49] node "embed-certs-180778" has status "Ready":"True"
I0120 12:32:58.518447 672840 node_ready.go:38] duration metric: took 23.23269ms for node "embed-certs-180778" to be "Ready" ...
I0120 12:32:58.518459 672840 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:32:58.529809 672840 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2n425" in "kube-system" namespace to be "Ready" ...
I0120 12:32:58.756753 672840 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0120 12:32:58.759666 672840 addons.go:514] duration metric: took 1.418427551s for enable addons: enabled=[storage-provisioner default-storageclass]
I0120 12:32:58.997764 672840 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-180778" context rescaled to 1 replicas
I0120 12:32:59.532719 672840 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-2n425" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-2n425" not found
I0120 12:32:59.532751 672840 pod_ready.go:82] duration metric: took 1.00290741s for pod "coredns-668d6bf9bc-2n425" in "kube-system" namespace to be "Ready" ...
E0120 12:32:59.532764 672840 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-2n425" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-2n425" not found
I0120 12:32:59.532771 672840 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-fkxfj" in "kube-system" namespace to be "Ready" ...
I0120 12:33:01.540135 672840 pod_ready.go:103] pod "coredns-668d6bf9bc-fkxfj" in "kube-system" namespace has status "Ready":"False"
I0120 12:33:06.913477 663170 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0120 12:33:06.924185 663170 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0120 12:33:06.927401 663170 out.go:201]
W0120 12:33:06.930237 663170 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0120 12:33:06.930282 663170 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0120 12:33:06.930305 663170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0120 12:33:06.930314 663170 out.go:270] *
W0120 12:33:06.931223 663170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0120 12:33:06.933295 663170 out.go:201]
==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                         ATTEMPT   POD ID          POD
b4c7a3b420fff   523cad1a4df73   2 minutes ago   Exited    dashboard-metrics-scraper    5         7f3e7ce10ee34   dashboard-metrics-scraper-8d5bb5db8-jmvh6
2dbb0b8040357   ba04bb24b9575   5 minutes ago   Running   storage-provisioner          3         07ad5812adf0d   storage-provisioner
d698a9d5733df   20b332c9a70d8   5 minutes ago   Running   kubernetes-dashboard         0         7923798dc5940   kubernetes-dashboard-cd95d586-g46zv
3ae3ce774b5dc   25a5233254979   5 minutes ago   Running   kube-proxy                   1         72ce9fced84f4   kube-proxy-q2cdx
b03ba2b22cc03   db91994f4ee8f   5 minutes ago   Running   coredns                      1         1ef3dcbc90b3a   coredns-74ff55c5b-vjbl2
fcc769c7e3726   ba04bb24b9575   5 minutes ago   Exited    storage-provisioner          2         07ad5812adf0d   storage-provisioner
7453f2b338621   1611cd07b61d5   5 minutes ago   Running   busybox                      1         64c7edb219399   busybox
a6dfb5f612403   2be0bcf609c65   5 minutes ago   Running   kindnet-cni                  1         1b5b4fb5da9ad   kindnet-vjzbq
beff5ecb54dc9   1df8a2b116bd1   5 minutes ago   Running   kube-controller-manager      1         fad4792077568   kube-controller-manager-old-k8s-version-618033
d8f6fdcd0e3fb   e7605f88f17d6   5 minutes ago   Running   kube-scheduler               1         54125e5534c82   kube-scheduler-old-k8s-version-618033
5d4812f61b58d   2c08bbbc02d3a   5 minutes ago   Running   kube-apiserver               1         50ae1022e6f18   kube-apiserver-old-k8s-version-618033
d0d87daa0a46e   05b738aa1bc63   6 minutes ago   Running   etcd                         1         0a1200b6c9c33   etcd-old-k8s-version-618033
82983b3baf56c   1611cd07b61d5   6 minutes ago   Exited    busybox                      0         dae0724dcb014   busybox
31e7ecd06558c   db91994f4ee8f   8 minutes ago   Exited    coredns                      0         3b8ad40fcaba9   coredns-74ff55c5b-vjbl2
2927f71245812   2be0bcf609c65   8 minutes ago   Exited    kindnet-cni                  0         46547b72b9275   kindnet-vjzbq
a14330fd1aa84   25a5233254979   8 minutes ago   Exited    kube-proxy                   0         d11c3ba6a5027   kube-proxy-q2cdx
8950cdd4d5874   1df8a2b116bd1   9 minutes ago   Exited    kube-controller-manager      0         3977abd27cda1   kube-controller-manager-old-k8s-version-618033
758444c7d1ae5   e7605f88f17d6   9 minutes ago   Exited    kube-scheduler               0         4f2ca3cd67a7c   kube-scheduler-old-k8s-version-618033
4ec4dad53941b   05b738aa1bc63   9 minutes ago   Exited    etcd                         0         5b656e31552bc   etcd-old-k8s-version-618033
6a26c537f8dc2   2c08bbbc02d3a   9 minutes ago   Exited    kube-apiserver               0         a82573898c09f   kube-apiserver-old-k8s-version-618033
==> containerd <==
Jan 20 12:29:32 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:32.706083575Z" level=info msg="StartContainer for \"d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594\" returns successfully"
Jan 20 12:29:32 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:32.706278677Z" level=info msg="received exit event container_id:\"d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594\" id:\"d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594\" pid:3068 exit_status:255 exited_at:{seconds:1737376172 nanos:705149128}"
Jan 20 12:29:32 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:32.736834692Z" level=info msg="shim disconnected" id=d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594 namespace=k8s.io
Jan 20 12:29:32 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:32.736897642Z" level=warning msg="cleaning up after shim disconnected" id=d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594 namespace=k8s.io
Jan 20 12:29:32 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:32.736912239Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 12:29:33 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:33.300767068Z" level=info msg="RemoveContainer for \"83e12fc09c110a83b84d726d73830db941995b77fca31381c9cf5418ad46d446\""
Jan 20 12:29:33 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:33.308560883Z" level=info msg="RemoveContainer for \"83e12fc09c110a83b84d726d73830db941995b77fca31381c9cf5418ad46d446\" returns successfully"
Jan 20 12:30:22 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:22.596997059Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:30:22 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:22.602592657Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Jan 20 12:30:22 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:22.604647825Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jan 20 12:30:22 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:22.604682131Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.599668356Z" level=info msg="CreateContainer within sandbox \"7f3e7ce10ee34f5d04c421ccacc494cbc4d32135d123b9886d5da4f82a54216a\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.620215824Z" level=info msg="CreateContainer within sandbox \"7f3e7ce10ee34f5d04c421ccacc494cbc4d32135d123b9886d5da4f82a54216a\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a\""
Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.621038164Z" level=info msg="StartContainer for \"b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a\""
Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.691919954Z" level=info msg="StartContainer for \"b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a\" returns successfully"
Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.692076747Z" level=info msg="received exit event container_id:\"b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a\" id:\"b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a\" pid:3299 exit_status:255 exited_at:{seconds:1737376253 nanos:691148421}"
Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.718036139Z" level=info msg="shim disconnected" id=b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a namespace=k8s.io
Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.718100903Z" level=warning msg="cleaning up after shim disconnected" id=b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a namespace=k8s.io
Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.718112316Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 12:30:54 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:54.529791856Z" level=info msg="RemoveContainer for \"d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594\""
Jan 20 12:30:54 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:54.536126848Z" level=info msg="RemoveContainer for \"d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594\" returns successfully"
Jan 20 12:33:05 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:33:05.597564332Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:33:05 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:33:05.605004907Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Jan 20 12:33:05 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:33:05.607103358Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jan 20 12:33:05 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:33:05.607108585Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:51384 - 38234 "HINFO IN 4655488605509180788.6412446353322546485. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033651761s
==> coredns [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5] <==
I0120 12:27:50.078581 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 12:27:20.077163535 +0000 UTC m=+0.025399956) (total time: 30.001216086s):
Trace[2019727887]: [30.001216086s] [30.001216086s] END
E0120 12:27:50.078819 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0120 12:27:50.079062 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 12:27:20.077242214 +0000 UTC m=+0.025478627) (total time: 30.001804636s):
Trace[939984059]: [30.001804636s] [30.001804636s] END
E0120 12:27:50.079072 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0120 12:27:50.081658 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 12:27:20.077156191 +0000 UTC m=+0.025392612) (total time: 30.004409395s):
Trace[911902081]: [30.004409395s] [30.004409395s] END
E0120 12:27:50.081679 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:42584 - 9585 "HINFO IN 267077671365996973.267405783250503531. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011601665s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
==> describe nodes <==
Name: old-k8s-version-618033
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-618033
kubernetes.io/os=linux
minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9
minikube.k8s.io/name=old-k8s-version-618033
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_20T12_24_18_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 20 Jan 2025 12:24:15 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-618033
AcquireTime: <unset>
RenewTime: Mon, 20 Jan 2025 12:33:00 +0000
Conditions:
Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
----            ------  -----------------                ------------------               ------                      -------
MemoryPressure  False   Mon, 20 Jan 2025 12:28:07 +0000  Mon, 20 Jan 2025 12:24:08 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure    False   Mon, 20 Jan 2025 12:28:07 +0000  Mon, 20 Jan 2025 12:24:08 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure     False   Mon, 20 Jan 2025 12:28:07 +0000  Mon, 20 Jan 2025 12:24:08 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
Ready           True    Mon, 20 Jan 2025 12:28:07 +0000  Mon, 20 Jan 2025 12:24:33 +0000  KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.85.2
Hostname: old-k8s-version-618033
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022292Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022292Ki
pods: 110
System Info:
Machine ID: 4296a18b2e774369b2694f137a6719b6
System UUID: 71f4dabb-94e8-4097-a5f8-81f5631c4c62
Boot ID: 1cf72276-e5cc-4a75-95c3-e1897ed2b9a5
Kernel Version: 5.15.0-1075-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.24
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace             Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------             ----                                             ------------  ----------  ---------------  -------------  ---
default               busybox                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
kube-system           coredns-74ff55c5b-vjbl2                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m35s
kube-system           etcd-old-k8s-version-618033                      100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m42s
kube-system           kindnet-vjzbq                                    100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m35s
kube-system           kube-apiserver-old-k8s-version-618033            250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m42s
kube-system           kube-controller-manager-old-k8s-version-618033   200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m42s
kube-system           kube-proxy-q2cdx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m35s
kube-system           kube-scheduler-old-k8s-version-618033            100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m42s
kube-system           metrics-server-9975d5f86-h8bg5                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m28s
kube-system           storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
kubernetes-dashboard  dashboard-metrics-scraper-8d5bb5db8-jmvh6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
kubernetes-dashboard  kubernetes-dashboard-cd95d586-g46zv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                950m (47%)  100m (5%)
memory             420Mi (5%)  220Mi (2%)
ephemeral-storage  100Mi (0%)  0 (0%)
hugepages-1Gi      0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
hugepages-32Mi     0 (0%)      0 (0%)
hugepages-64Ki     0 (0%)      0 (0%)
Events:
Type    Reason                   Age                  From        Message
----    ------                   ----                 ----        -------
Normal  NodeHasSufficientMemory  9m2s (x4 over 9m2s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    9m2s (x3 over 9m2s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     9m2s (x3 over 9m2s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientPID
Normal  Starting                 8m42s                kubelet     Starting kubelet.
Normal  NodeHasSufficientMemory  8m42s                kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    8m42s                kubelet     Node old-k8s-version-618033 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     8m42s                kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  8m42s                kubelet     Updated Node Allocatable limit across pods
Normal  NodeReady                8m35s                kubelet     Node old-k8s-version-618033 status is now: NodeReady
Normal  Starting                 8m32s                kube-proxy  Starting kube-proxy.
Normal  Starting                 6m1s                 kubelet     Starting kubelet.
Normal  NodeHasSufficientMemory  6m1s (x7 over 6m1s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     6m1s (x8 over 6m1s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  6m1s                 kubelet     Updated Node Allocatable limit across pods
Normal  Starting                 5m47s                kube-proxy  Starting kube-proxy.
==> dmesg <==
[Jan20 11:09] hrtimer: interrupt took 29526498 ns
==> etcd [4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf] <==
raft2025/01/20 12:24:08 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
raft2025/01/20 12:24:08 INFO: 9f0758e1c58a86ed became leader at term 2
raft2025/01/20 12:24:08 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
2025-01-20 12:24:08.327246 I | etcdserver: setting up the initial cluster version to 3.4
2025-01-20 12:24:08.328329 N | etcdserver/membership: set the initial cluster version to 3.4
2025-01-20 12:24:08.328527 I | etcdserver/api: enabled capabilities for version 3.4
2025-01-20 12:24:08.328651 I | etcdserver: published {Name:old-k8s-version-618033 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
2025-01-20 12:24:08.328738 I | embed: ready to serve client requests
2025-01-20 12:24:08.332704 I | embed: serving client requests on 127.0.0.1:2379
2025-01-20 12:24:08.335978 I | embed: ready to serve client requests
2025-01-20 12:24:08.337372 I | embed: serving client requests on 192.168.85.2:2379
2025-01-20 12:24:28.522800 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:24:32.303794 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:24:42.303850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:24:52.303845 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:25:02.303930 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:25:12.303716 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:25:22.303660 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:25:32.303768 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:25:42.303892 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:25:52.303705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:26:02.303833 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:26:12.303725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:26:22.303853 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:26:32.303657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7] <==
2025-01-20 12:29:00.704675 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:29:10.704594 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:29:20.704621 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:29:30.704519 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:29:40.704491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:29:50.704459 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:30:00.704531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:30:10.704652 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:30:20.704624 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:30:30.704584 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:30:40.704654 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:30:50.704498 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:31:00.704601 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:31:10.705517 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:31:20.704705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:31:30.704549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:31:40.704641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:31:50.704435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:32:00.704599 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:32:10.704553 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:32:20.704461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:32:30.704644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:32:40.704521 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:32:50.704737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-01-20 12:33:00.704576 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
12:33:08 up 2:15, 0 users, load average: 2.47, 2.11, 2.43
Linux old-k8s-version-618033 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10] <==
I0120 12:24:37.623240 1 controller.go:401] Syncing nftables rules
I0120 12:24:47.429736 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:24:47.429780 1 main.go:301] handling current node
I0120 12:24:57.422978 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:24:57.423015 1 main.go:301] handling current node
I0120 12:25:07.422745 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:25:07.422783 1 main.go:301] handling current node
I0120 12:25:17.431957 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:25:17.431991 1 main.go:301] handling current node
I0120 12:25:27.430290 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:25:27.430329 1 main.go:301] handling current node
I0120 12:25:37.423219 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:25:37.423256 1 main.go:301] handling current node
I0120 12:25:47.428106 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:25:47.428141 1 main.go:301] handling current node
I0120 12:25:57.425681 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:25:57.425716 1 main.go:301] handling current node
I0120 12:26:07.422447 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:26:07.422482 1 main.go:301] handling current node
I0120 12:26:17.433666 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:26:17.433710 1 main.go:301] handling current node
I0120 12:26:27.425680 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:26:27.425904 1 main.go:301] handling current node
I0120 12:26:37.422753 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:26:37.422881 1 main.go:301] handling current node
==> kindnet [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3] <==
I0120 12:30:59.823117 1 main.go:301] handling current node
I0120 12:31:09.829647 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:31:09.829687 1 main.go:301] handling current node
I0120 12:31:19.822575 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:31:19.822611 1 main.go:301] handling current node
I0120 12:31:29.822716 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:31:29.822753 1 main.go:301] handling current node
I0120 12:31:39.830802 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:31:39.830840 1 main.go:301] handling current node
I0120 12:31:49.829699 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:31:49.829730 1 main.go:301] handling current node
I0120 12:31:59.822729 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:31:59.822930 1 main.go:301] handling current node
I0120 12:32:09.830714 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:32:09.830749 1 main.go:301] handling current node
I0120 12:32:19.825664 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:32:19.825942 1 main.go:301] handling current node
I0120 12:32:29.825671 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:32:29.825714 1 main.go:301] handling current node
I0120 12:32:39.829641 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:32:39.829678 1 main.go:301] handling current node
I0120 12:32:49.831452 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:32:49.831493 1 main.go:301] handling current node
I0120 12:32:59.827987 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0120 12:32:59.828027 1 main.go:301] handling current node
==> kube-apiserver [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a] <==
I0120 12:29:43.589190 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 12:29:43.589200 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0120 12:30:19.695497 1 handler_proxy.go:102] no RequestInfo found in the context
E0120 12:30:19.695809 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0120 12:30:19.695892 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 12:30:21.203695 1 client.go:360] parsed scheme: "passthrough"
I0120 12:30:21.203746 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 12:30:21.203755 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 12:31:00.265548 1 client.go:360] parsed scheme: "passthrough"
I0120 12:31:00.265650 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 12:31:00.265661 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 12:31:41.668682 1 client.go:360] parsed scheme: "passthrough"
I0120 12:31:41.668718 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 12:31:41.668725 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0120 12:32:18.305295 1 handler_proxy.go:102] no RequestInfo found in the context
E0120 12:32:18.305381 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0120 12:32:18.305399 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 12:32:18.957974 1 client.go:360] parsed scheme: "passthrough"
I0120 12:32:18.958282 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 12:32:18.958397 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 12:32:57.392152 1 client.go:360] parsed scheme: "passthrough"
I0120 12:32:57.392276 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 12:32:57.392306 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15] <==
I0120 12:24:15.902543 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0120 12:24:15.902575 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0120 12:24:15.915163 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0120 12:24:15.918638 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0120 12:24:15.918660 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0120 12:24:16.479959 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0120 12:24:16.539241 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0120 12:24:16.706895 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
I0120 12:24:16.708205 1 controller.go:606] quota admission added evaluator for: endpoints
I0120 12:24:16.714559 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0120 12:24:17.525479 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0120 12:24:17.998686 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0120 12:24:18.075745 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0120 12:24:26.508453 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0120 12:24:33.510425 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0120 12:24:33.641754 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0120 12:24:50.780452 1 client.go:360] parsed scheme: "passthrough"
I0120 12:24:50.780635 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 12:24:50.780655 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 12:25:26.182146 1 client.go:360] parsed scheme: "passthrough"
I0120 12:25:26.182192 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 12:25:26.182202 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0120 12:26:05.047876 1 client.go:360] parsed scheme: "passthrough"
I0120 12:26:05.047935 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0120 12:26:05.047944 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2] <==
I0120 12:24:33.554355 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0120 12:24:33.554388 1 shared_informer.go:247] Caches are synced for stateful set
I0120 12:24:33.554401 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0120 12:24:33.575106 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0120 12:24:33.575228 1 shared_informer.go:247] Caches are synced for daemon sets
I0120 12:24:33.577860 1 shared_informer.go:247] Caches are synced for HPA
I0120 12:24:33.590628 1 shared_informer.go:247] Caches are synced for attach detach
I0120 12:24:33.661233 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-618033" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0120 12:24:33.666411 1 range_allocator.go:373] Set node old-k8s-version-618033 PodCIDR to [10.244.0.0/24]
I0120 12:24:33.666738 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-2s7v5"
I0120 12:24:33.693686 1 shared_informer.go:247] Caches are synced for resource quota
I0120 12:24:33.724490 1 shared_informer.go:247] Caches are synced for resource quota
I0120 12:24:33.789536 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vjbl2"
E0120 12:24:33.817744 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0120 12:24:33.865929 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q2cdx"
I0120 12:24:33.885071 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0120 12:24:33.956113 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vjzbq"
I0120 12:24:34.130752 1 shared_informer.go:247] Caches are synced for garbage collector
I0120 12:24:34.130776 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0120 12:24:34.185272 1 shared_informer.go:247] Caches are synced for garbage collector
I0120 12:24:34.768589 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0120 12:24:34.800381 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-2s7v5"
I0120 12:24:38.501163 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0120 12:26:39.510664 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E0120 12:26:39.649257 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
==> kube-controller-manager [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0] <==
W0120 12:28:41.679704 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 12:29:09.180254 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 12:29:13.330150 1 request.go:655] Throttling request took 1.048486792s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0120 12:29:14.181461 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 12:29:39.682136 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 12:29:45.831865 1 request.go:655] Throttling request took 1.048361271s, request: GET:https://192.168.85.2:8443/apis/policy/v1beta1?timeout=32s
W0120 12:29:46.683218 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 12:30:10.184183 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 12:30:18.333687 1 request.go:655] Throttling request took 1.048352548s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 12:30:19.187184 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 12:30:40.696442 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 12:30:50.838304 1 request.go:655] Throttling request took 1.048332598s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 12:30:51.689871 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 12:31:11.198558 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 12:31:23.340305 1 request.go:655] Throttling request took 1.048373362s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 12:31:24.191866 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 12:31:41.700313 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 12:31:55.842278 1 request.go:655] Throttling request took 1.048180465s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0120 12:31:56.694255 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 12:32:12.202399 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 12:32:28.350303 1 request.go:655] Throttling request took 1.048357981s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
W0120 12:32:29.202179 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0120 12:32:42.704827 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0120 12:33:00.852735 1 request.go:655] Throttling request took 1.048397918s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1?timeout=32s
W0120 12:33:01.704319 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-proxy [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352] <==
I0120 12:27:21.329773 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0120 12:27:21.329861 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0120 12:27:21.351952 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0120 12:27:21.352064 1 server_others.go:185] Using iptables Proxier.
I0120 12:27:21.352354 1 server.go:650] Version: v1.20.0
I0120 12:27:21.353262 1 config.go:315] Starting service config controller
I0120 12:27:21.353410 1 shared_informer.go:240] Waiting for caches to sync for service config
I0120 12:27:21.353764 1 config.go:224] Starting endpoint slice config controller
I0120 12:27:21.354015 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0120 12:27:21.454012 1 shared_informer.go:247] Caches are synced for service config
I0120 12:27:21.454220 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03] <==
I0120 12:24:36.223331 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0120 12:24:36.223483 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0120 12:24:36.253817 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0120 12:24:36.253925 1 server_others.go:185] Using iptables Proxier.
I0120 12:24:36.254169 1 server.go:650] Version: v1.20.0
I0120 12:24:36.258240 1 config.go:315] Starting service config controller
I0120 12:24:36.258258 1 shared_informer.go:240] Waiting for caches to sync for service config
I0120 12:24:36.258275 1 config.go:224] Starting endpoint slice config controller
I0120 12:24:36.258279 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0120 12:24:36.358389 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0120 12:24:36.358473 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0] <==
I0120 12:24:09.467082 1 serving.go:331] Generated self-signed cert in-memory
W0120 12:24:15.146004 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0120 12:24:15.146049 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0120 12:24:15.146057 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0120 12:24:15.146062 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0120 12:24:15.236889 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0120 12:24:15.245116 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0120 12:24:15.245238 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 12:24:15.252877 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0120 12:24:15.251489 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 12:24:15.251578 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 12:24:15.251647 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 12:24:15.251773 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 12:24:15.251848 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 12:24:15.251925 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 12:24:15.257303 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 12:24:15.257827 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 12:24:15.259252 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 12:24:15.261549 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 12:24:15.261980 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 12:24:15.262931 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 12:24:16.104536 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 12:24:16.418500 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0120 12:24:18.153029 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e] <==
I0120 12:27:11.768971 1 serving.go:331] Generated self-signed cert in-memory
W0120 12:27:17.269776 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0120 12:27:17.269815 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0120 12:27:17.269828 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0120 12:27:17.269844 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0120 12:27:17.375328 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0120 12:27:17.384992 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 12:27:17.385017 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 12:27:17.385326 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0120 12:27:17.485396 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jan 20 12:31:38 old-k8s-version-618033 kubelet[655]: E0120 12:31:38.596565 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: I0120 12:31:40.595804 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: E0120 12:31:40.596161 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
Jan 20 12:31:49 old-k8s-version-618033 kubelet[655]: E0120 12:31:49.596536 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: I0120 12:31:53.596067 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: E0120 12:31:53.597121 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
Jan 20 12:32:03 old-k8s-version-618033 kubelet[655]: E0120 12:32:03.599363 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: I0120 12:32:08.595889 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: E0120 12:32:08.596270 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: I0120 12:32:21.595945 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: I0120 12:32:34.595780 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: I0120 12:32:46.595881 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:33:00 old-k8s-version-618033 kubelet[655]: I0120 12:33:00.595784 655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
Jan 20 12:33:00 old-k8s-version-618033 kubelet[655]: E0120 12:33:00.596134 655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
Jan 20 12:33:05 old-k8s-version-618033 kubelet[655]: E0120 12:33:05.607462 655 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Jan 20 12:33:05 old-k8s-version-618033 kubelet[655]: E0120 12:33:05.607934 655 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Jan 20 12:33:05 old-k8s-version-618033 kubelet[655]: E0120 12:33:05.608161 655 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-t7n5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Jan 20 12:33:05 old-k8s-version-618033 kubelet[655]: E0120 12:33:05.608344 655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
==> kubernetes-dashboard [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6] <==
2025/01/20 12:27:43 Starting overwatch
2025/01/20 12:27:43 Using namespace: kubernetes-dashboard
2025/01/20 12:27:43 Using in-cluster config to connect to apiserver
2025/01/20 12:27:43 Using secret token for csrf signing
2025/01/20 12:27:43 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/01/20 12:27:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/01/20 12:27:43 Successful initial request to the apiserver, version: v1.20.0
2025/01/20 12:27:43 Generating JWE encryption key
2025/01/20 12:27:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/01/20 12:27:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/01/20 12:27:43 Initializing JWE encryption key from synchronized object
2025/01/20 12:27:43 Creating in-cluster Sidecar client
2025/01/20 12:27:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:27:43 Serving insecurely on HTTP port: 9090
2025/01/20 12:28:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:28:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:29:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:29:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:30:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:30:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:31:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:31:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:32:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:32:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d] <==
I0120 12:28:04.716298 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0120 12:28:04.736296 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0120 12:28:04.736554 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0120 12:28:22.218353 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0120 12:28:22.218713 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-618033_8349d959-2a16-481e-8df8-01ea447732a0!
I0120 12:28:22.221145 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f52819a7-3a4d-4a3b-a66a-9681d171e973", APIVersion:"v1", ResourceVersion:"850", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-618033_8349d959-2a16-481e-8df8-01ea447732a0 became leader
I0120 12:28:22.319902 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-618033_8349d959-2a16-481e-8df8-01ea447732a0!
==> storage-provisioner [fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224] <==
I0120 12:27:19.476511 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0120 12:27:49.477991 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-618033 -n old-k8s-version-618033
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-618033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-h8bg5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-618033 describe pod metrics-server-9975d5f86-h8bg5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-618033 describe pod metrics-server-9975d5f86-h8bg5: exit status 1 (114.476378ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-h8bg5" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-618033 describe pod metrics-server-9975d5f86-h8bg5: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (378.09s)
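Note on the failure above: the kubelet entries show the metrics-server pod configured with the unresolvable registry fake.domain (ImagePullBackOff, "lookup fake.domain ... no such host"), so metrics-server-9975d5f86-h8bg5 never reaches Running and the SecondStart wait eventually fails after 378.09s. The commands below are a minimal post-mortem sketch, not part of the test harness; they assume the old-k8s-version-618033 profile still exists and that kubectl and the minikube binary from this run are available (nslookup may or may not be present in the node image).

    # Which pods in kube-system are stuck, and on which image
    kubectl --context old-k8s-version-618033 -n kube-system get pods -o wide
    kubectl --context old-k8s-version-618033 -n kube-system describe deployment metrics-server

    # Confirm from inside the node that fake.domain does not resolve
    out/minikube-linux-arm64 -p old-k8s-version-618033 ssh -- nslookup fake.domain

    # Capture full node-level logs (kubelet, containerd, addons) for later inspection
    out/minikube-linux-arm64 -p old-k8s-version-618033 logs --file=postmortem.log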