=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-368787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E1026 01:32:09.673362 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:32:20.130066 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-368787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m11.437413658s)
-- stdout --
* [old-k8s-version-368787] minikube v1.34.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=19868
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-368787" primary control-plane node in "old-k8s-version-368787" cluster
* Pulling base image v0.0.45-1729876044-19868 ...
* Restarting existing docker container for "old-k8s-version-368787" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-368787 addons enable metrics-server
* Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
-- /stdout --
** stderr **
I1026 01:32:03.567361 2073170 out.go:345] Setting OutFile to fd 1 ...
I1026 01:32:03.567648 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 01:32:03.567676 2073170 out.go:358] Setting ErrFile to fd 2...
I1026 01:32:03.567698 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 01:32:03.568030 2073170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
I1026 01:32:03.568501 2073170 out.go:352] Setting JSON to false
I1026 01:32:03.569593 2073170 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":33274,"bootTime":1729873050,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1026 01:32:03.569704 2073170 start.go:139] virtualization:
I1026 01:32:03.576988 2073170 out.go:177] * [old-k8s-version-368787] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1026 01:32:03.579892 2073170 out.go:177] - MINIKUBE_LOCATION=19868
I1026 01:32:03.579970 2073170 notify.go:220] Checking for updates...
I1026 01:32:03.582336 2073170 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1026 01:32:03.584542 2073170 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
I1026 01:32:03.586580 2073170 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
I1026 01:32:03.588476 2073170 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1026 01:32:03.590851 2073170 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1026 01:32:03.593459 2073170 config.go:182] Loaded profile config "old-k8s-version-368787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1026 01:32:03.596009 2073170 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
I1026 01:32:03.598044 2073170 driver.go:394] Setting default libvirt URI to qemu:///system
I1026 01:32:03.649391 2073170 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1026 01:32:03.649585 2073170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1026 01:32:03.752644 2073170 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 01:32:03.742525094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1026 01:32:03.752753 2073170 docker.go:318] overlay module found
I1026 01:32:03.755258 2073170 out.go:177] * Using the docker driver based on existing profile
I1026 01:32:03.757210 2073170 start.go:297] selected driver: docker
I1026 01:32:03.757230 2073170 start.go:901] validating driver "docker" against &{Name:old-k8s-version-368787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1026 01:32:03.757346 2073170 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1026 01:32:03.758067 2073170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1026 01:32:03.874283 2073170 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 01:32:03.864453068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1026 01:32:03.874673 2073170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1026 01:32:03.874710 2073170 cni.go:84] Creating CNI manager for ""
I1026 01:32:03.874755 2073170 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1026 01:32:03.874793 2073170 start.go:340] cluster config:
{Name:old-k8s-version-368787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1026 01:32:03.877155 2073170 out.go:177] * Starting "old-k8s-version-368787" primary control-plane node in "old-k8s-version-368787" cluster
I1026 01:32:03.879021 2073170 cache.go:121] Beginning downloading kic base image for docker with containerd
I1026 01:32:03.880756 2073170 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
I1026 01:32:03.882953 2073170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1026 01:32:03.882986 2073170 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
I1026 01:32:03.883005 2073170 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I1026 01:32:03.883032 2073170 cache.go:56] Caching tarball of preloaded images
I1026 01:32:03.883121 2073170 preload.go:172] Found /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1026 01:32:03.883131 2073170 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I1026 01:32:03.883244 2073170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/config.json ...
I1026 01:32:03.922366 2073170 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
I1026 01:32:03.922393 2073170 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
I1026 01:32:03.922407 2073170 cache.go:194] Successfully downloaded all kic artifacts
I1026 01:32:03.922432 2073170 start.go:360] acquireMachinesLock for old-k8s-version-368787: {Name:mk44d3baf3e6deb53ffd853750905e1ae52b8a7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1026 01:32:03.922498 2073170 start.go:364] duration metric: took 33.904µs to acquireMachinesLock for "old-k8s-version-368787"
I1026 01:32:03.922525 2073170 start.go:96] Skipping create...Using existing machine configuration
I1026 01:32:03.922533 2073170 fix.go:54] fixHost starting:
I1026 01:32:03.922806 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
I1026 01:32:03.957596 2073170 fix.go:112] recreateIfNeeded on old-k8s-version-368787: state=Stopped err=<nil>
W1026 01:32:03.957634 2073170 fix.go:138] unexpected machine state, will restart: <nil>
I1026 01:32:03.960165 2073170 out.go:177] * Restarting existing docker container for "old-k8s-version-368787" ...
I1026 01:32:03.962127 2073170 cli_runner.go:164] Run: docker start old-k8s-version-368787
I1026 01:32:04.369526 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
I1026 01:32:04.392083 2073170 kic.go:430] container "old-k8s-version-368787" state is running.
I1026 01:32:04.392506 2073170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-368787
I1026 01:32:04.421945 2073170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/config.json ...
I1026 01:32:04.422186 2073170 machine.go:93] provisionDockerMachine start ...
I1026 01:32:04.422247 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:04.457029 2073170 main.go:141] libmachine: Using SSH client type: native
I1026 01:32:04.457292 2073170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 35304 <nil> <nil>}
I1026 01:32:04.457302 2073170 main.go:141] libmachine: About to run SSH command:
hostname
I1026 01:32:04.458149 2073170 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42720->127.0.0.1:35304: read: connection reset by peer
I1026 01:32:07.590810 2073170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-368787
I1026 01:32:07.590833 2073170 ubuntu.go:169] provisioning hostname "old-k8s-version-368787"
I1026 01:32:07.590938 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:07.610393 2073170 main.go:141] libmachine: Using SSH client type: native
I1026 01:32:07.610662 2073170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 35304 <nil> <nil>}
I1026 01:32:07.610681 2073170 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-368787 && echo "old-k8s-version-368787" | sudo tee /etc/hostname
I1026 01:32:07.757056 2073170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-368787
I1026 01:32:07.757180 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:07.781219 2073170 main.go:141] libmachine: Using SSH client type: native
I1026 01:32:07.781479 2073170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 35304 <nil> <nil>}
I1026 01:32:07.781507 2073170 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-368787' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-368787/g' /etc/hosts;
	else
		echo '127.0.1.1 old-k8s-version-368787' | sudo tee -a /etc/hosts;
	fi
fi
I1026 01:32:07.919293 2073170 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1026 01:32:07.919394 2073170 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19868-1857747/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-1857747/.minikube}
I1026 01:32:07.919429 2073170 ubuntu.go:177] setting up certificates
I1026 01:32:07.919463 2073170 provision.go:84] configureAuth start
I1026 01:32:07.919563 2073170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-368787
I1026 01:32:07.942146 2073170 provision.go:143] copyHostCerts
I1026 01:32:07.942219 2073170 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem, removing ...
I1026 01:32:07.942234 2073170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem
I1026 01:32:07.942309 2073170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem (1675 bytes)
I1026 01:32:07.942404 2073170 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem, removing ...
I1026 01:32:07.942408 2073170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem
I1026 01:32:07.942433 2073170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem (1078 bytes)
I1026 01:32:07.942529 2073170 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem, removing ...
I1026 01:32:07.942534 2073170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem
I1026 01:32:07.942556 2073170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem (1123 bytes)
I1026 01:32:07.942605 2073170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-368787 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-368787]
I1026 01:32:08.395094 2073170 provision.go:177] copyRemoteCerts
I1026 01:32:08.395169 2073170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1026 01:32:08.395216 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:08.411242 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
I1026 01:32:08.504767 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1026 01:32:08.542760 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1026 01:32:08.585116 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1026 01:32:08.614235 2073170 provision.go:87] duration metric: took 694.746008ms to configureAuth
I1026 01:32:08.614267 2073170 ubuntu.go:193] setting minikube options for container-runtime
I1026 01:32:08.614469 2073170 config.go:182] Loaded profile config "old-k8s-version-368787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1026 01:32:08.614483 2073170 machine.go:96] duration metric: took 4.192289872s to provisionDockerMachine
I1026 01:32:08.614491 2073170 start.go:293] postStartSetup for "old-k8s-version-368787" (driver="docker")
I1026 01:32:08.614502 2073170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1026 01:32:08.614559 2073170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1026 01:32:08.614605 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:08.633350 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
I1026 01:32:08.726371 2073170 ssh_runner.go:195] Run: cat /etc/os-release
I1026 01:32:08.729573 2073170 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1026 01:32:08.729612 2073170 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1026 01:32:08.729628 2073170 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1026 01:32:08.729636 2073170 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1026 01:32:08.729647 2073170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-1857747/.minikube/addons for local assets ...
I1026 01:32:08.729710 2073170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-1857747/.minikube/files for local assets ...
I1026 01:32:08.729794 2073170 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem -> 18643732.pem in /etc/ssl/certs
I1026 01:32:08.729902 2073170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1026 01:32:08.738426 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem --> /etc/ssl/certs/18643732.pem (1708 bytes)
I1026 01:32:08.766686 2073170 start.go:296] duration metric: took 152.178881ms for postStartSetup
I1026 01:32:08.766778 2073170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1026 01:32:08.766837 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:08.788043 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
I1026 01:32:08.876138 2073170 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1026 01:32:08.880709 2073170 fix.go:56] duration metric: took 4.958168785s for fixHost
I1026 01:32:08.880738 2073170 start.go:83] releasing machines lock for "old-k8s-version-368787", held for 4.958226205s
I1026 01:32:08.880811 2073170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-368787
I1026 01:32:08.897764 2073170 ssh_runner.go:195] Run: cat /version.json
I1026 01:32:08.897842 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:08.898108 2073170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1026 01:32:08.898180 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:08.920650 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
I1026 01:32:08.922221 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
I1026 01:32:09.013457 2073170 ssh_runner.go:195] Run: systemctl --version
I1026 01:32:09.173576 2073170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1026 01:32:09.177993 2073170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1026 01:32:09.196148 2073170 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1026 01:32:09.196230 2073170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1026 01:32:09.205688 2073170 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1026 01:32:09.205721 2073170 start.go:495] detecting cgroup driver to use...
I1026 01:32:09.205753 2073170 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1026 01:32:09.205800 2073170 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1026 01:32:09.228824 2073170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1026 01:32:09.244979 2073170 docker.go:217] disabling cri-docker service (if available) ...
I1026 01:32:09.245052 2073170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1026 01:32:09.269125 2073170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1026 01:32:09.287831 2073170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1026 01:32:09.419060 2073170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1026 01:32:09.540597 2073170 docker.go:233] disabling docker service ...
I1026 01:32:09.540722 2073170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1026 01:32:09.558694 2073170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1026 01:32:09.575662 2073170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1026 01:32:09.691643 2073170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1026 01:32:09.813763 2073170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1026 01:32:09.828060 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1026 01:32:09.847871 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I1026 01:32:09.860321 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1026 01:32:09.872079 2073170 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1026 01:32:09.872193 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1026 01:32:09.883427 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1026 01:32:09.895409 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1026 01:32:09.908189 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1026 01:32:09.919078 2073170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1026 01:32:09.930108 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1026 01:32:09.942640 2073170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1026 01:32:09.954367 2073170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1026 01:32:09.966519 2073170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1026 01:32:10.105866 2073170 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1026 01:32:10.358469 2073170 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1026 01:32:10.358541 2073170 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1026 01:32:10.363485 2073170 start.go:563] Will wait 60s for crictl version
I1026 01:32:10.363639 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:32:10.367187 2073170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1026 01:32:10.426897 2073170 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1026 01:32:10.427046 2073170 ssh_runner.go:195] Run: containerd --version
I1026 01:32:10.458317 2073170 ssh_runner.go:195] Run: containerd --version
I1026 01:32:10.486310 2073170 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
I1026 01:32:10.488290 2073170 cli_runner.go:164] Run: docker network inspect old-k8s-version-368787 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1026 01:32:10.508166 2073170 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1026 01:32:10.512429 2073170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1026 01:32:10.523128 2073170 kubeadm.go:883] updating cluster {Name:old-k8s-version-368787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1026 01:32:10.523257 2073170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1026 01:32:10.523310 2073170 ssh_runner.go:195] Run: sudo crictl images --output json
I1026 01:32:10.588809 2073170 containerd.go:627] all images are preloaded for containerd runtime.
I1026 01:32:10.588838 2073170 containerd.go:534] Images already preloaded, skipping extraction
I1026 01:32:10.588902 2073170 ssh_runner.go:195] Run: sudo crictl images --output json
I1026 01:32:10.667303 2073170 containerd.go:627] all images are preloaded for containerd runtime.
I1026 01:32:10.667357 2073170 cache_images.go:84] Images are preloaded, skipping loading
I1026 01:32:10.667366 2073170 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I1026 01:32:10.667509 2073170 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-368787 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1026 01:32:10.667611 2073170 ssh_runner.go:195] Run: sudo crictl info
I1026 01:32:10.763889 2073170 cni.go:84] Creating CNI manager for ""
I1026 01:32:10.763915 2073170 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1026 01:32:10.763926 2073170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1026 01:32:10.763951 2073170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-368787 NodeName:old-k8s-version-368787 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1026 01:32:10.764084 2073170 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "old-k8s-version-368787"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1026 01:32:10.764154 2073170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I1026 01:32:10.776843 2073170 binaries.go:44] Found k8s binaries, skipping transfer
I1026 01:32:10.776915 2073170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1026 01:32:10.787122 2073170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I1026 01:32:10.810640 2073170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1026 01:32:10.833512 2073170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I1026 01:32:10.862881 2073170 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1026 01:32:10.870107 2073170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1026 01:32:10.903812 2073170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1026 01:32:11.059110 2073170 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1026 01:32:11.090969 2073170 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787 for IP: 192.168.76.2
I1026 01:32:11.090992 2073170 certs.go:194] generating shared ca certs ...
I1026 01:32:11.091009 2073170 certs.go:226] acquiring lock for ca certs: {Name:mkcea56562cecb76fcc8b6004959524ff574e9b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:32:11.091167 2073170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.key
I1026 01:32:11.091216 2073170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.key
I1026 01:32:11.091228 2073170 certs.go:256] generating profile certs ...
I1026 01:32:11.091363 2073170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.key
I1026 01:32:11.091440 2073170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/apiserver.key.8a4d58df
I1026 01:32:11.091492 2073170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/proxy-client.key
I1026 01:32:11.091607 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373.pem (1338 bytes)
W1026 01:32:11.091644 2073170 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373_empty.pem, impossibly tiny 0 bytes
I1026 01:32:11.091655 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem (1679 bytes)
I1026 01:32:11.091683 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem (1078 bytes)
I1026 01:32:11.091715 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem (1123 bytes)
I1026 01:32:11.091752 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem (1675 bytes)
I1026 01:32:11.091805 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem (1708 bytes)
I1026 01:32:11.092524 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1026 01:32:11.159947 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1026 01:32:11.233325 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1026 01:32:11.273907 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1026 01:32:11.304225 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I1026 01:32:11.334396 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1026 01:32:11.364406 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1026 01:32:11.390986 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1026 01:32:11.417392 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1026 01:32:11.441659 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373.pem --> /usr/share/ca-certificates/1864373.pem (1338 bytes)
I1026 01:32:11.467041 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem --> /usr/share/ca-certificates/18643732.pem (1708 bytes)
I1026 01:32:11.492813 2073170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1026 01:32:11.520467 2073170 ssh_runner.go:195] Run: openssl version
I1026 01:32:11.526888 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1026 01:32:11.537955 2073170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1026 01:32:11.542243 2073170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
I1026 01:32:11.542386 2073170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1026 01:32:11.551494 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1026 01:32:11.562215 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1864373.pem && ln -fs /usr/share/ca-certificates/1864373.pem /etc/ssl/certs/1864373.pem"
I1026 01:32:11.572923 2073170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1864373.pem
I1026 01:32:11.577506 2073170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:51 /usr/share/ca-certificates/1864373.pem
I1026 01:32:11.577626 2073170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1864373.pem
I1026 01:32:11.585281 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1864373.pem /etc/ssl/certs/51391683.0"
I1026 01:32:11.597021 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18643732.pem && ln -fs /usr/share/ca-certificates/18643732.pem /etc/ssl/certs/18643732.pem"
I1026 01:32:11.611697 2073170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18643732.pem
I1026 01:32:11.615085 2073170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:51 /usr/share/ca-certificates/18643732.pem
I1026 01:32:11.615147 2073170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18643732.pem
I1026 01:32:11.622225 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18643732.pem /etc/ssl/certs/3ec20f2e.0"
I1026 01:32:11.631601 2073170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1026 01:32:11.635176 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1026 01:32:11.642386 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1026 01:32:11.651235 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1026 01:32:11.658900 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1026 01:32:11.666055 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1026 01:32:11.673012 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1026 01:32:11.680109 2073170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-368787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1026 01:32:11.680221 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1026 01:32:11.680332 2073170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1026 01:32:11.724978 2073170 cri.go:89] found id: "3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
I1026 01:32:11.725044 2073170 cri.go:89] found id: "3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
I1026 01:32:11.725063 2073170 cri.go:89] found id: "720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
I1026 01:32:11.725074 2073170 cri.go:89] found id: "79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
I1026 01:32:11.725078 2073170 cri.go:89] found id: "4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
I1026 01:32:11.725086 2073170 cri.go:89] found id: "ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
I1026 01:32:11.725089 2073170 cri.go:89] found id: "5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
I1026 01:32:11.725092 2073170 cri.go:89] found id: "19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
I1026 01:32:11.725095 2073170 cri.go:89] found id: ""
I1026 01:32:11.725149 2073170 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1026 01:32:11.738691 2073170 cri.go:116] JSON = null
W1026 01:32:11.738747 2073170 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
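[Editor's note: the warning comes from comparing two views of the same containerd root: crictl found 8 kube-system containers, but runc's JSON listing was literally "null". A rough sketch of reproducing both listings, with the commands copied verbatim from the log (not verified against minikube's cri.go):]

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// IDs of kube-system containers, as crictl reports them.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	ids := strings.Fields(string(out))

	// runc's view of the same root; a bare "null" means runc lists
	// no (paused) containers even though crictl found some.
	runcOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc:", err)
		return
	}
	if strings.TrimSpace(string(runcOut)) == "null" && len(ids) > 0 {
		fmt.Printf("unpause mismatch: runc listed 0, crictl ps returned %d\n", len(ids))
	}
}
```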
I1026 01:32:11.738839 2073170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1026 01:32:11.749650 2073170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1026 01:32:11.749675 2073170 kubeadm.go:593] restartPrimaryControlPlane start ...
I1026 01:32:11.749737 2073170 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1026 01:32:11.760662 2073170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1026 01:32:11.761096 2073170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-368787" does not appear in /home/jenkins/minikube-integration/19868-1857747/kubeconfig
I1026 01:32:11.761210 2073170 kubeconfig.go:62] /home/jenkins/minikube-integration/19868-1857747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-368787" cluster setting kubeconfig missing "old-k8s-version-368787" context setting]
I1026 01:32:11.761527 2073170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/kubeconfig: {Name:mk1a434cd0cc84bfd2a4a232bfd16b0239e78299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
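[Editor's note: the verify/repair step checks whether the profile has both a cluster and a context entry in the kubeconfig, then rewrites the file under a file lock. A hedged sketch with client-go's clientcmd loader; the lock handling (500ms delay, 1m timeout in the log) is elided, and this is not minikube's kubeconfig.go:]

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG")
	name := "old-k8s-version-368787"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load:", err)
		return
	}
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	if !hasCluster || !hasContext {
		// The real code acquires a WriteFile lock before repairing.
		fmt.Printf("%s needs updating (will repair)\n", path)
	}
}
```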
I1026 01:32:11.762915 2073170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1026 01:32:11.771752 2073170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I1026 01:32:11.771785 2073170 kubeadm.go:597] duration metric: took 22.102755ms to restartPrimaryControlPlane
I1026 01:32:11.771795 2073170 kubeadm.go:394] duration metric: took 91.695709ms to StartCluster
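[Editor's note: whether the control plane needs reconfiguration is decided by diffing the deployed kubeadm.yaml against the freshly generated one; `diff -u` exits 0 when they match. A minimal exit-code sketch, paths taken from the log:]

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	switch e := cmd.Run().(type) {
	case nil:
		// Exit 0: files identical, no reconfiguration required.
		fmt.Println("The running cluster does not require reconfiguration")
	case *exec.ExitError:
		// diff exits 1 when the configs differ.
		fmt.Println("kubeadm config drifted, exit:", e.ExitCode())
	default:
		fmt.Println("diff failed to run:", e)
	}
}
```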
I1026 01:32:11.771810 2073170 settings.go:142] acquiring lock: {Name:mk5238870f54ce90633b3ed0ddcc81fb678d064e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:32:11.771874 2073170 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19868-1857747/kubeconfig
I1026 01:32:11.772485 2073170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/kubeconfig: {Name:mk1a434cd0cc84bfd2a4a232bfd16b0239e78299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:32:11.772681 2073170 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1026 01:32:11.773047 2073170 config.go:182] Loaded profile config "old-k8s-version-368787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1026 01:32:11.773067 2073170 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
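[Editor's note: enable addons start walks a map of addon names to desired states; the interleaved "Setting addon ..." lines that follow suggest each enabled addon is brought up on its own goroutine. A simplified concurrency sketch; the enable function is a placeholder, not minikube's addons API:]

```go
package main

import (
	"fmt"
	"sync"
)

func enable(profile, addon string) {
	// Placeholder for the real per-addon work (inspect container,
	// scp manifests, kubectl apply with retries ...).
	fmt.Printf("Setting addon %s=true in %q\n", addon, profile)
}

func main() {
	toEnable := map[string]bool{
		"dashboard": true, "default-storageclass": true,
		"metrics-server": true, "storage-provisioner": true,
		"ingress": false, // disabled addons are skipped
	}
	var wg sync.WaitGroup
	for name, want := range toEnable {
		if !want {
			continue
		}
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			enable("old-k8s-version-368787", name)
		}(name)
	}
	wg.Wait()
	fmt.Println("Enabled addons")
}
```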
I1026 01:32:11.773192 2073170 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-368787"
I1026 01:32:11.773206 2073170 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-368787"
I1026 01:32:11.773215 2073170 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-368787"
I1026 01:32:11.773221 2073170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-368787"
I1026 01:32:11.773224 2073170 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-368787"
W1026 01:32:11.773231 2073170 addons.go:243] addon metrics-server should already be in state true
I1026 01:32:11.773258 2073170 host.go:66] Checking if "old-k8s-version-368787" exists ...
I1026 01:32:11.773527 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
I1026 01:32:11.773657 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
I1026 01:32:11.773209 2073170 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-368787"
W1026 01:32:11.773930 2073170 addons.go:243] addon storage-provisioner should already be in state true
I1026 01:32:11.773956 2073170 host.go:66] Checking if "old-k8s-version-368787" exists ...
I1026 01:32:11.774371 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
I1026 01:32:11.778317 2073170 out.go:177] * Verifying Kubernetes components...
I1026 01:32:11.778706 2073170 addons.go:69] Setting dashboard=true in profile "old-k8s-version-368787"
I1026 01:32:11.778729 2073170 addons.go:234] Setting addon dashboard=true in "old-k8s-version-368787"
W1026 01:32:11.778737 2073170 addons.go:243] addon dashboard should already be in state true
I1026 01:32:11.778780 2073170 host.go:66] Checking if "old-k8s-version-368787" exists ...
I1026 01:32:11.779287 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
I1026 01:32:11.780556 2073170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1026 01:32:11.820678 2073170 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1026 01:32:11.821780 2073170 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-368787"
W1026 01:32:11.821799 2073170 addons.go:243] addon default-storageclass should already be in state true
I1026 01:32:11.821825 2073170 host.go:66] Checking if "old-k8s-version-368787" exists ...
I1026 01:32:11.826689 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
I1026 01:32:11.830481 2073170 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1026 01:32:11.830503 2073170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1026 01:32:11.830567 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:11.844527 2073170 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1026 01:32:11.844670 2073170 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1026 01:32:11.851632 2073170 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1026 01:32:11.851658 2073170 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1026 01:32:11.851723 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:11.854726 2073170 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1026 01:32:11.858441 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1026 01:32:11.858468 2073170 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1026 01:32:11.858542 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
I1026 01:32:11.879262 2073170 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1026 01:32:11.879283 2073170 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1026 01:32:11.879643 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
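[Editor's note: before each scp/ssh step, the host port Docker mapped to the container's port 22 is resolved with the inspect template shown above. The same lookup from Go, command copied from the log:]

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	// The identical Go template Docker evaluates in the log lines above.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("old-k8s-version-368787")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port:", port) // 35304 in this run
}
```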
I1026 01:32:11.895046 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
I1026 01:32:11.906810 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
I1026 01:32:11.910912 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
I1026 01:32:11.936785 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
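[Editor's note: the four sshutil lines each open an SSH session to 127.0.0.1:35304 with the machine's id_rsa. A hedged sketch using golang.org/x/crypto/ssh; the key path here is illustrative, and host-key checking is disabled only because the target is a local throwaway container, as in this run:]

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func newClient(ip, port, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable only for a local test container, as here.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", ip+":"+port, cfg)
}

func main() {
	c, err := newClient("127.0.0.1", "35304",
		os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-368787/id_rsa"),
		"docker")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer c.Close()
	fmt.Println("new ssh client ready")
}
```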
I1026 01:32:11.963820 2073170 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1026 01:32:12.015026 2073170 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-368787" to be "Ready" ...
I1026 01:32:12.075741 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1026 01:32:12.079911 2073170 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1026 01:32:12.079931 2073170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1026 01:32:12.137884 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1026 01:32:12.137963 2073170 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1026 01:32:12.140714 2073170 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1026 01:32:12.140783 2073170 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1026 01:32:12.161531 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1026 01:32:12.215078 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1026 01:32:12.215221 2073170 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1026 01:32:12.226102 2073170 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1026 01:32:12.226199 2073170 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1026 01:32:12.274048 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1026 01:32:12.309163 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1026 01:32:12.309273 2073170 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1026 01:32:12.357749 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1026 01:32:12.357822 2073170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1026 01:32:12.403263 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1026 01:32:12.403366 2073170 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W1026 01:32:12.408667 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.408802 2073170 retry.go:31] will retry after 310.718992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
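[Editor's note: every failed kubectl apply below is handed to retry.go, which reschedules it after a randomized, growing delay (310ms, 242ms, ... up to several seconds later in the log). A minimal sketch of that pattern with jittered, doubling backoff; the jitter policy is a guess, not minikube's actual pkg/util/retry implementation:]

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry re-runs fn until it succeeds or attempts are exhausted,
// sleeping a jittered, doubling delay between tries.
func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	err := retry(5, 200*time.Millisecond, func() error {
		return exec.Command("kubectl", "apply", "-f",
			"/etc/kubernetes/addons/storage-provisioner.yaml").Run()
	})
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
```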
W1026 01:32:12.443246 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.443377 2073170 retry.go:31] will retry after 242.748817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.450179 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1026 01:32:12.450276 2073170 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
W1026 01:32:12.452912 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.453021 2073170 retry.go:31] will retry after 228.853978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.473007 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1026 01:32:12.473035 2073170 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1026 01:32:12.492138 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1026 01:32:12.492163 2073170 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1026 01:32:12.517237 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1026 01:32:12.517274 2073170 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1026 01:32:12.537740 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1026 01:32:12.633682 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.633714 2073170 retry.go:31] will retry after 295.010345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.682979 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1026 01:32:12.686370 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1026 01:32:12.719944 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1026 01:32:12.802406 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.802456 2073170 retry.go:31] will retry after 349.317562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1026 01:32:12.845179 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.845219 2073170 retry.go:31] will retry after 362.541488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1026 01:32:12.875425 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.875460 2073170 retry.go:31] will retry after 225.41973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:12.929651 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1026 01:32:13.017588 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:13.017679 2073170 retry.go:31] will retry after 326.956571ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:13.101997 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1026 01:32:13.152472 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1026 01:32:13.208632 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1026 01:32:13.258868 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:13.258959 2073170 retry.go:31] will retry after 457.097198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1026 01:32:13.339025 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:13.339111 2073170 retry.go:31] will retry after 838.797017ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:13.345212 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1026 01:32:13.351047 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:13.351137 2073170 retry.go:31] will retry after 752.009894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1026 01:32:13.439683 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:13.439719 2073170 retry.go:31] will retry after 838.127127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:13.716979 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1026 01:32:13.818819 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:13.818853 2073170 retry.go:31] will retry after 745.949942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:14.016572 2073170 node_ready.go:53] error getting node "old-k8s-version-368787": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-368787": dial tcp 192.168.76.2:8443: connect: connection refused
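[Editor's note: node_ready.go keeps hitting the apiserver's node endpoint; while the apiserver container is still coming up the dial is refused, and the check simply runs again on the next tick. A bare-bones reachability poll against the same URL; TLS verification is skipped purely to keep the sketch short, whereas the real check authenticates with certs from the kubeconfig:]

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-368787"
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch only: real code presents client certs instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("error getting node:", err) // connection refused, as above
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
		return
	}
}
```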
I1026 01:32:14.103895 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1026 01:32:14.178422 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1026 01:32:14.192606 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:14.192642 2073170 retry.go:31] will retry after 1.051748191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1026 01:32:14.270245 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:14.270323 2073170 retry.go:31] will retry after 428.664476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:14.278496 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1026 01:32:14.397404 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:14.397451 2073170 retry.go:31] will retry after 968.409914ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:14.565363 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1026 01:32:14.699933 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1026 01:32:14.787541 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:14.787590 2073170 retry.go:31] will retry after 1.554636804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1026 01:32:14.936864 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:14.936901 2073170 retry.go:31] will retry after 728.862459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:15.245130 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1026 01:32:15.366534 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1026 01:32:15.402051 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:15.402100 2073170 retry.go:31] will retry after 833.114051ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1026 01:32:15.542313 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:15.542350 2073170 retry.go:31] will retry after 857.512374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:15.666713 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1026 01:32:15.804572 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:15.804611 2073170 retry.go:31] will retry after 2.707466245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:16.235760 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1026 01:32:16.322988 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:16.323024 2073170 retry.go:31] will retry after 2.705849654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:16.343250 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1026 01:32:16.400873 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1026 01:32:16.437288 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:16.437325 2073170 retry.go:31] will retry after 2.211013377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1026 01:32:16.499076 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:16.499114 2073170 retry.go:31] will retry after 1.172239395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:16.516601 2073170 node_ready.go:53] error getting node "old-k8s-version-368787": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-368787": dial tcp 192.168.76.2:8443: connect: connection refused
I1026 01:32:17.672290 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1026 01:32:17.755271 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:17.755425 2073170 retry.go:31] will retry after 1.852126673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:18.513042 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1026 01:32:18.586978 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:18.587017 2073170 retry.go:31] will retry after 3.925391068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:18.649384 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1026 01:32:18.734166 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:18.734202 2073170 retry.go:31] will retry after 1.759836158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:19.015874 2073170 node_ready.go:53] error getting node "old-k8s-version-368787": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-368787": dial tcp 192.168.76.2:8443: connect: connection refused
I1026 01:32:19.029256 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1026 01:32:19.109954 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:19.109992 2073170 retry.go:31] will retry after 3.098320623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:19.608129 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1026 01:32:19.726372 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:19.726404 2073170 retry.go:31] will retry after 3.576047191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:20.494262 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1026 01:32:20.635207 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:20.635239 2073170 retry.go:31] will retry after 5.571537164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1026 01:32:22.209033 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1026 01:32:22.513410 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1026 01:32:23.302722 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1026 01:32:26.207194 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1026 01:32:28.130809 2073170 node_ready.go:49] node "old-k8s-version-368787" has status "Ready":"True"
I1026 01:32:28.130832 2073170 node_ready.go:38] duration metric: took 16.115712125s for node "old-k8s-version-368787" to be "Ready" ...
I1026 01:32:28.130843 2073170 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
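[Editor's note: the "extra waiting" phase iterates the label selectors listed above and, per selector, waits until a matching kube-system pod reports the PodReady condition. A hedged client-go sketch; the kubeconfig path and selector list are copied from the log, and the readiness loop here only prints rather than re-polling:]

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One selector per system-critical component, as in the log line above.
	selectors := []string{"k8s-app=kube-dns", "component=etcd",
		"component=kube-apiserver", "component=kube-controller-manager",
		"k8s-app=kube-proxy", "component=kube-scheduler"}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q has status \"Ready\":%q\n", p.Name, c.Status)
				}
			}
		}
	}
}
```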
I1026 01:32:28.304565 2073170 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-q7ksx" in "kube-system" namespace to be "Ready" ...
I1026 01:32:28.406789 2073170 pod_ready.go:93] pod "coredns-74ff55c5b-q7ksx" in "kube-system" namespace has status "Ready":"True"
I1026 01:32:28.406818 2073170 pod_ready.go:82] duration metric: took 102.164226ms for pod "coredns-74ff55c5b-q7ksx" in "kube-system" namespace to be "Ready" ...
I1026 01:32:28.406832 2073170 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
I1026 01:32:28.443881 2073170 pod_ready.go:93] pod "etcd-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"True"
I1026 01:32:28.443911 2073170 pod_ready.go:82] duration metric: took 37.070533ms for pod "etcd-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
I1026 01:32:28.443927 2073170 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
I1026 01:32:29.218578 2073170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.009512798s)
I1026 01:32:29.218734 2073170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.705294931s)
I1026 01:32:29.218764 2073170 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-368787"
I1026 01:32:29.543645 2073170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.336413373s)
I1026 01:32:29.543753 2073170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.240997796s)
I1026 01:32:29.545882 2073170 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-368787 addons enable metrics-server
I1026 01:32:29.547642 2073170 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
I1026 01:32:29.549529 2073170 addons.go:510] duration metric: took 17.776471913s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
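The "Completed:" lines at 01:32:29 report four apply commands, started seconds apart, all finishing together with overlapping durations (7.0s, 6.7s, 3.3s, 6.2s) — consistent with the addon applies running concurrently and being joined before the "Enabled addons" summary. A hedged sketch of that fan-out/join using errgroup; the kubectl path and manifest names are taken from the log, but the function itself is an assumption about the structure, not minikube's addons.go:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"

	"golang.org/x/sync/errgroup"
)

// applyManifests runs one `kubectl apply --force -f <manifest>` per
// addon concurrently and waits for all of them, matching the
// overlapping "Completed:" durations in the log above.
func applyManifests(ctx context.Context, manifests []string) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, m := range manifests {
		m := m // capture loop variable for the goroutine
		g.Go(func() error {
			cmd := exec.CommandContext(ctx,
				"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.20.0/kubectl",
				"apply", "--force", "-f", m)
			out, err := cmd.CombinedOutput()
			if err != nil {
				return fmt.Errorf("apply %s: %v\n%s", m, err, out)
			}
			return nil
		})
	}
	return g.Wait() // join before printing "Enabled addons: ..."
}

func main() {
	err := applyManifests(context.Background(), []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	})
	fmt.Println("enable addons:", err)
}
```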
I1026 01:32:30.450495 2073170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:32.952429 2073170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:35.450253 2073170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:36.950677 2073170 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"True"
I1026 01:32:36.950706 2073170 pod_ready.go:82] duration metric: took 8.50673388s for pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
I1026 01:32:36.950719 2073170 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
I1026 01:32:38.957321 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:41.459646 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:43.957579 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:45.962028 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:48.457308 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:50.458159 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:52.459046 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:54.958472 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:57.457937 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:32:59.458621 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:01.957173 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:03.958196 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:06.458219 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:08.462240 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:10.957381 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:13.459235 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:15.957470 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:18.457384 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:20.457905 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:22.958581 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:25.457350 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:27.458223 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:29.957557 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:32.456980 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:34.457323 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:36.458114 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:38.468892 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:40.956467 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:41.956496 2073170 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"True"
I1026 01:33:41.956523 2073170 pod_ready.go:82] duration metric: took 1m5.005795554s for pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
I1026 01:33:41.956534 2073170 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9q264" in "kube-system" namespace to be "Ready" ...
I1026 01:33:41.961562 2073170 pod_ready.go:93] pod "kube-proxy-9q264" in "kube-system" namespace has status "Ready":"True"
I1026 01:33:41.961591 2073170 pod_ready.go:82] duration metric: took 5.049617ms for pod "kube-proxy-9q264" in "kube-system" namespace to be "Ready" ...
I1026 01:33:41.961602 2073170 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
I1026 01:33:43.967942 2073170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:45.968308 2073170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:47.977225 2073170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:48.967594 2073170 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"True"
I1026 01:33:48.967619 2073170 pod_ready.go:82] duration metric: took 7.00600995s for pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
I1026 01:33:48.967630 2073170 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace to be "Ready" ...
I1026 01:33:50.978010 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:52.978655 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:55.475643 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:57.476707 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:33:59.975150 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:02.475065 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:04.488646 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:06.978527 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:09.476165 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:11.975712 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:13.990814 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:16.479158 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:18.977978 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:20.978225 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:23.477539 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:25.974399 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:27.976880 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:29.980173 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:32.475115 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:34.478508 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:36.479994 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:38.983668 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:41.476950 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:43.485811 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:45.975127 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:47.975496 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:49.977110 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:52.476096 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:54.478172 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:56.977397 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:34:59.482319 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:01.974023 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:03.975465 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:05.977964 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:08.485332 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:10.974374 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:12.975363 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:14.977421 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:16.980582 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:19.475486 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:21.478176 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:23.978834 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:25.989106 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:28.486736 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:30.977452 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:32.979017 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:35.478130 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:37.975067 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:39.975943 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:41.979275 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:44.477363 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:46.479551 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:48.974097 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:50.978780 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:53.474551 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:55.474782 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:57.478975 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:35:59.975807 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:02.476744 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:04.976508 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:06.977878 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:09.477246 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:11.974267 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:13.978184 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:15.978303 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:17.978385 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:19.992616 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:22.476294 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:24.477658 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:26.979767 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:29.474149 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:31.474259 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:33.478228 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:35.977110 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:37.977162 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:40.477632 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:42.979661 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:45.475566 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:47.480122 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:49.975101 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:51.981869 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:54.479101 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:56.979657 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:36:59.476151 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:01.973872 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:03.974794 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:05.980099 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:08.476353 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:10.974308 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:13.473906 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:15.474149 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:17.474272 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:19.474878 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:21.481450 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:23.973530 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:26.474692 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:28.974421 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:31.474651 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:33.974680 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:36.477558 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:38.973325 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:40.978365 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:43.474016 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:45.475440 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:47.476030 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:48.981712 2073170 pod_ready.go:82] duration metric: took 4m0.014058258s for pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace to be "Ready" ...
E1026 01:37:48.981744 2073170 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1026 01:37:48.981801 2073170 pod_ready.go:39] duration metric: took 5m20.850945581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
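The WaitExtra failure above is a plain context deadline: metrics-server never went Ready inside the 4m0s per-pod budget (it can't — its image points at the deliberately unresolvable fake.domain), so the poll loop returns the context's error. A stdlib-only sketch of that wait shape, with a hypothetical `ready` callback standing in for the pod check:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitPodCondition polls ready() every interval until it returns true
// or the deadline passes; on timeout it surfaces ctx.Err(), which is
// exactly the "context deadline exceeded" recorded above.
func waitPodCondition(parent context.Context, timeout, interval time.Duration, ready func() bool) error {
	ctx, cancel := context.WithTimeout(parent, timeout)
	defer cancel()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if ready() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context.DeadlineExceeded
		case <-ticker.C:
		}
	}
}

func main() {
	err := waitPodCondition(context.Background(), 100*time.Millisecond, 10*time.Millisecond,
		func() bool { return false }) // a pod that never becomes Ready
	fmt.Println("WaitExtra: waitPodCondition:", err)
}
```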
I1026 01:37:48.981824 2073170 api_server.go:52] waiting for apiserver process to appear ...
I1026 01:37:48.981925 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1026 01:37:48.982046 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1026 01:37:49.061661 2073170 cri.go:89] found id: "caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
I1026 01:37:49.061738 2073170 cri.go:89] found id: "ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
I1026 01:37:49.061758 2073170 cri.go:89] found id: ""
I1026 01:37:49.061783 2073170 logs.go:282] 2 containers: [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d]
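Each cri.go listing that follows is a `sudo crictl ps -a --quiet --name=<component>` whose stdout is one container ID per line; two IDs show up per component, most likely because `-a` includes the exited pre-restart container alongside the running one. A stdlib-only sketch of shelling out and splitting the IDs, as an illustration of the listing step:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs shells out to crictl the way the cri.go lines above
// do and returns the non-empty container IDs from its stdout.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line) // each becomes a "found id: ..." line
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	fmt.Printf("%d containers: %v (err=%v)\n", len(ids), ids, err)
}
```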
I1026 01:37:49.061874 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.066064 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.070465 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1026 01:37:49.070527 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1026 01:37:49.152162 2073170 cri.go:89] found id: "3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
I1026 01:37:49.152183 2073170 cri.go:89] found id: "19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
I1026 01:37:49.152189 2073170 cri.go:89] found id: ""
I1026 01:37:49.152196 2073170 logs.go:282] 2 containers: [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272]
I1026 01:37:49.152250 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.157843 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.161728 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1026 01:37:49.161874 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1026 01:37:49.213678 2073170 cri.go:89] found id: "c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
I1026 01:37:49.213756 2073170 cri.go:89] found id: "3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
I1026 01:37:49.213776 2073170 cri.go:89] found id: ""
I1026 01:37:49.213800 2073170 logs.go:282] 2 containers: [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e]
I1026 01:37:49.213885 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.220177 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.232203 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1026 01:37:49.232345 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1026 01:37:49.294557 2073170 cri.go:89] found id: "9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
I1026 01:37:49.294645 2073170 cri.go:89] found id: "4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
I1026 01:37:49.294665 2073170 cri.go:89] found id: ""
I1026 01:37:49.294689 2073170 logs.go:282] 2 containers: [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7]
I1026 01:37:49.294782 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.299146 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.303215 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1026 01:37:49.303357 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1026 01:37:49.350569 2073170 cri.go:89] found id: "f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
I1026 01:37:49.350646 2073170 cri.go:89] found id: "79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
I1026 01:37:49.350668 2073170 cri.go:89] found id: ""
I1026 01:37:49.350691 2073170 logs.go:282] 2 containers: [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670]
I1026 01:37:49.350780 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.356495 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.360987 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1026 01:37:49.361095 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1026 01:37:49.416682 2073170 cri.go:89] found id: "407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
I1026 01:37:49.416758 2073170 cri.go:89] found id: "5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
I1026 01:37:49.416778 2073170 cri.go:89] found id: ""
I1026 01:37:49.416800 2073170 logs.go:282] 2 containers: [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526]
I1026 01:37:49.416889 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.421667 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.425830 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1026 01:37:49.425971 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1026 01:37:49.476562 2073170 cri.go:89] found id: "19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
I1026 01:37:49.476639 2073170 cri.go:89] found id: "720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
I1026 01:37:49.476670 2073170 cri.go:89] found id: ""
I1026 01:37:49.476691 2073170 logs.go:282] 2 containers: [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b]
I1026 01:37:49.476777 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.481392 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.485639 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1026 01:37:49.485779 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1026 01:37:49.536284 2073170 cri.go:89] found id: "f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
I1026 01:37:49.536306 2073170 cri.go:89] found id: "3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
I1026 01:37:49.536312 2073170 cri.go:89] found id: ""
I1026 01:37:49.536320 2073170 logs.go:282] 2 containers: [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad]
I1026 01:37:49.536379 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.540772 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.545367 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1026 01:37:49.545440 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1026 01:37:49.595865 2073170 cri.go:89] found id: "ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
I1026 01:37:49.595886 2073170 cri.go:89] found id: ""
I1026 01:37:49.595894 2073170 logs.go:282] 1 containers: [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125]
I1026 01:37:49.595953 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.606230 2073170 logs.go:123] Gathering logs for coredns [3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e] ...
I1026 01:37:49.606256 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
I1026 01:37:49.660000 2073170 logs.go:123] Gathering logs for kube-scheduler [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044] ...
I1026 01:37:49.660082 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
I1026 01:37:49.717276 2073170 logs.go:123] Gathering logs for kube-controller-manager [5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526] ...
I1026 01:37:49.717309 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
I1026 01:37:49.815045 2073170 logs.go:123] Gathering logs for kube-apiserver [ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d] ...
I1026 01:37:49.815084 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
I1026 01:37:49.932109 2073170 logs.go:123] Gathering logs for kube-proxy [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52] ...
I1026 01:37:49.932149 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
I1026 01:37:50.002376 2073170 logs.go:123] Gathering logs for kube-proxy [79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670] ...
I1026 01:37:50.002417 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
I1026 01:37:50.059980 2073170 logs.go:123] Gathering logs for kindnet [720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b] ...
I1026 01:37:50.060057 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
I1026 01:37:50.142243 2073170 logs.go:123] Gathering logs for container status ...
I1026 01:37:50.142278 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1026 01:37:50.273887 2073170 logs.go:123] Gathering logs for kubelet ...
I1026 01:37:50.273926 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1026 01:37:50.400116 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142066 658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-44wvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-44wvw" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.400368 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142157 658 reflector.go:138] object-"kube-system"/"metrics-server-token-7tsjh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7tsjh" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.400590 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142205 658 reflector.go:138] object-"kube-system"/"coredns-token-n94ql": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-n94ql" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.400798 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142249 658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.401019 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142293 658 reflector.go:138] object-"kube-system"/"kube-proxy-token-47vp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-47vp6" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.401237 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142333 658 reflector.go:138] object-"kube-system"/"kindnet-token-qqrpm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-qqrpm" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.401445 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142465 658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.401657 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142549 658 reflector.go:138] object-"default"/"default-token-2jcx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2jcx9" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.409687 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.113479 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:37:50.411310 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.907637 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.414173 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:44 old-k8s-version-368787 kubelet[658]: E1026 01:32:44.742023 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:37:50.416401 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:56 old-k8s-version-368787 kubelet[658]: E1026 01:32:56.075801 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.416745 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.080022 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.416936 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.735904 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.417608 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:01 old-k8s-version-368787 kubelet[658]: E1026 01:33:01.507025 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.420461 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:11 old-k8s-version-368787 kubelet[658]: E1026 01:33:11.743711 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:37:50.421064 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:17 old-k8s-version-368787 kubelet[658]: E1026 01:33:17.172767 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.421398 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:21 old-k8s-version-368787 kubelet[658]: E1026 01:33:21.507449 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.421588 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:24 old-k8s-version-368787 kubelet[658]: E1026 01:33:24.731672 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.421924 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:32 old-k8s-version-368787 kubelet[658]: E1026 01:33:32.731878 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.422114 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:37 old-k8s-version-368787 kubelet[658]: E1026 01:33:37.731969 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.422719 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:46 old-k8s-version-368787 kubelet[658]: E1026 01:33:46.262782 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.422908 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:48 old-k8s-version-368787 kubelet[658]: E1026 01:33:48.732324 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.423246 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:51 old-k8s-version-368787 kubelet[658]: E1026 01:33:51.507083 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.425832 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:03 old-k8s-version-368787 kubelet[658]: E1026 01:34:03.750208 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:37:50.426182 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:06 old-k8s-version-368787 kubelet[658]: E1026 01:34:06.731790 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.426374 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:18 old-k8s-version-368787 kubelet[658]: E1026 01:34:18.732360 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.426713 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:21 old-k8s-version-368787 kubelet[658]: E1026 01:34:21.731670 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.426907 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:33 old-k8s-version-368787 kubelet[658]: E1026 01:34:33.732041 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.427535 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:37 old-k8s-version-368787 kubelet[658]: E1026 01:34:37.414157 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.427870 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:41 old-k8s-version-368787 kubelet[658]: E1026 01:34:41.507110 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.428130 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:48 old-k8s-version-368787 kubelet[658]: E1026 01:34:48.731821 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.428468 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:54 old-k8s-version-368787 kubelet[658]: E1026 01:34:54.731233 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.428662 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:59 old-k8s-version-368787 kubelet[658]: E1026 01:34:59.732434 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.428993 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:06 old-k8s-version-368787 kubelet[658]: E1026 01:35:06.731705 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.429180 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:12 old-k8s-version-368787 kubelet[658]: E1026 01:35:12.731827 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.429561 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:19 old-k8s-version-368787 kubelet[658]: E1026 01:35:19.732195 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.432106 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:25 old-k8s-version-368787 kubelet[658]: E1026 01:35:25.742123 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:37:50.432445 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:33 old-k8s-version-368787 kubelet[658]: E1026 01:35:33.731192 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.432634 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:40 old-k8s-version-368787 kubelet[658]: E1026 01:35:40.736836 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.432982 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:48 old-k8s-version-368787 kubelet[658]: E1026 01:35:48.731218 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.433171 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:53 old-k8s-version-368787 kubelet[658]: E1026 01:35:53.733617 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.433771 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:03 old-k8s-version-368787 kubelet[658]: E1026 01:36:03.662574 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.433959 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:06 old-k8s-version-368787 kubelet[658]: E1026 01:36:06.731650 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.434293 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:11 old-k8s-version-368787 kubelet[658]: E1026 01:36:11.507131 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.434525 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:21 old-k8s-version-368787 kubelet[658]: E1026 01:36:21.731783 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.434861 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:23 old-k8s-version-368787 kubelet[658]: E1026 01:36:23.731690 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.435204 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:34 old-k8s-version-368787 kubelet[658]: E1026 01:36:34.731309 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.435398 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:35 old-k8s-version-368787 kubelet[658]: E1026 01:36:35.736727 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.435735 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: E1026 01:36:48.731231 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.435924 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:49 old-k8s-version-368787 kubelet[658]: E1026 01:36:49.732052 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.436258 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: E1026 01:37:02.731253 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.436447 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:04 old-k8s-version-368787 kubelet[658]: E1026 01:37:04.731836 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.436780 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: E1026 01:37:16.731416 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.436968 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.437304 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.437492 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.437824 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.438014 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
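Two failure signatures alternate through the window above: dashboard-metrics-scraper is in CrashLoopBackOff, with kubelet doubling the restart delay each time (10s, 20s, 40s, 1m20s, 2m40s, capped at 5m by default), while metrics-server never starts at all because its image is pinned to the unresolvable fake.domain registry. A minimal sketch for watching both pods directly, assuming the kubeconfig context created by this profile is still valid (pod names are the ones from the log):
    # CrashLoopBackOff: restart count and current back-off show in the STATUS column
    kubectl --context old-k8s-version-368787 -n kubernetes-dashboard get pod dashboard-metrics-scraper-8d5bb5db8-w4mwk
    # ImagePullBackOff: the Events section shows the failed DNS lookup for fake.domain
    kubectl --context old-k8s-version-368787 -n kube-system describe pod metrics-server-9975d5f86-v2pwf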
I1026 01:37:50.438025 2073170 logs.go:123] Gathering logs for dmesg ...
I1026 01:37:50.438040 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1026 01:37:50.454757 2073170 logs.go:123] Gathering logs for describe nodes ...
I1026 01:37:50.454785 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1026 01:37:50.669583 2073170 logs.go:123] Gathering logs for containerd ...
I1026 01:37:50.669862 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1026 01:37:50.736640 2073170 logs.go:123] Gathering logs for coredns [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7] ...
I1026 01:37:50.736718 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
I1026 01:37:50.791237 2073170 logs.go:123] Gathering logs for kindnet [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a] ...
I1026 01:37:50.791266 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
I1026 01:37:50.860038 2073170 logs.go:123] Gathering logs for storage-provisioner [3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad] ...
I1026 01:37:50.860076 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
I1026 01:37:50.936359 2073170 logs.go:123] Gathering logs for kube-scheduler [4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7] ...
I1026 01:37:50.936407 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
I1026 01:37:51.078999 2073170 logs.go:123] Gathering logs for kube-controller-manager [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab] ...
I1026 01:37:51.079039 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
I1026 01:37:51.197002 2073170 logs.go:123] Gathering logs for storage-provisioner [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb] ...
I1026 01:37:51.197040 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
I1026 01:37:51.270252 2073170 logs.go:123] Gathering logs for kubernetes-dashboard [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125] ...
I1026 01:37:51.270281 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
I1026 01:37:51.351708 2073170 logs.go:123] Gathering logs for kube-apiserver [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9] ...
I1026 01:37:51.351739 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
I1026 01:37:51.428214 2073170 logs.go:123] Gathering logs for etcd [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace] ...
I1026 01:37:51.428289 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
I1026 01:37:51.480860 2073170 logs.go:123] Gathering logs for etcd [19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272] ...
I1026 01:37:51.480949 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
I1026 01:37:51.533094 2073170 out.go:358] Setting ErrFile to fd 2...
I1026 01:37:51.533165 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1026 01:37:51.533239 2073170 out.go:270] X Problems detected in kubelet:
W1026 01:37:51.533278 2073170 out.go:270] Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:51.533314 2073170 out.go:270] Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:51.533366 2073170 out.go:270] Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:51.533403 2073170 out.go:270] Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:51.533452 2073170 out.go:270] Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
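The "Problems detected in kubelet" digest above is minikube's own summary of the journalctl scan it ran at 01:37:50 (logs.go:138). Roughly the same lines can be pulled by hand from the node; a sketch, assuming the docker driver container is still up (the grep pattern is a hypothetical filter for the pod_workers errors seen here, not something minikube itself runs):
    # tail the kubelet unit inside the kic node container and keep only pod sync errors
    docker exec old-k8s-version-368787 journalctl -u kubelet -n 400 --no-pager | grep 'pod_workers.go'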
I1026 01:37:51.533488 2073170 out.go:358] Setting ErrFile to fd 2...
I1026 01:37:51.533508 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 01:38:01.535291 2073170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1026 01:38:01.547514 2073170 api_server.go:72] duration metric: took 5m49.774798849s to wait for apiserver process to appear ...
I1026 01:38:01.547541 2073170 api_server.go:88] waiting for apiserver healthz status ...
I1026 01:38:01.547576 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1026 01:38:01.547632 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1026 01:38:01.587732 2073170 cri.go:89] found id: "caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
I1026 01:38:01.587754 2073170 cri.go:89] found id: "ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
I1026 01:38:01.587759 2073170 cri.go:89] found id: ""
I1026 01:38:01.587766 2073170 logs.go:282] 2 containers: [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d]
I1026 01:38:01.587828 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.592229 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.595984 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1026 01:38:01.596068 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1026 01:38:01.639841 2073170 cri.go:89] found id: "3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
I1026 01:38:01.639871 2073170 cri.go:89] found id: "19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
I1026 01:38:01.639876 2073170 cri.go:89] found id: ""
I1026 01:38:01.639884 2073170 logs.go:282] 2 containers: [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272]
I1026 01:38:01.639994 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.644607 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.648285 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1026 01:38:01.648362 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1026 01:38:01.720748 2073170 cri.go:89] found id: "c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
I1026 01:38:01.720774 2073170 cri.go:89] found id: "3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
I1026 01:38:01.720780 2073170 cri.go:89] found id: ""
I1026 01:38:01.720787 2073170 logs.go:282] 2 containers: [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e]
I1026 01:38:01.720846 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.726066 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.732857 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1026 01:38:01.732992 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1026 01:38:01.814967 2073170 cri.go:89] found id: "9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
I1026 01:38:01.814997 2073170 cri.go:89] found id: "4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
I1026 01:38:01.815005 2073170 cri.go:89] found id: ""
I1026 01:38:01.815012 2073170 logs.go:282] 2 containers: [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7]
I1026 01:38:01.815203 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.819665 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.826464 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1026 01:38:01.826610 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1026 01:38:01.897678 2073170 cri.go:89] found id: "f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
I1026 01:38:01.897708 2073170 cri.go:89] found id: "79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
I1026 01:38:01.897714 2073170 cri.go:89] found id: ""
I1026 01:38:01.897727 2073170 logs.go:282] 2 containers: [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670]
I1026 01:38:01.897878 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.922934 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.928999 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1026 01:38:01.929123 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1026 01:38:02.046457 2073170 cri.go:89] found id: "407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
I1026 01:38:02.046487 2073170 cri.go:89] found id: "5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
I1026 01:38:02.046498 2073170 cri.go:89] found id: ""
I1026 01:38:02.046512 2073170 logs.go:282] 2 containers: [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526]
I1026 01:38:02.046624 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.067786 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.076203 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1026 01:38:02.076352 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1026 01:38:02.150567 2073170 cri.go:89] found id: "19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
I1026 01:38:02.150612 2073170 cri.go:89] found id: "720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
I1026 01:38:02.150617 2073170 cri.go:89] found id: ""
I1026 01:38:02.150673 2073170 logs.go:282] 2 containers: [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b]
I1026 01:38:02.150774 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.156731 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.163096 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1026 01:38:02.163254 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1026 01:38:02.248045 2073170 cri.go:89] found id: "ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
I1026 01:38:02.248072 2073170 cri.go:89] found id: ""
I1026 01:38:02.248081 2073170 logs.go:282] 1 containers: [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125]
I1026 01:38:02.248231 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.258094 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1026 01:38:02.258253 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1026 01:38:02.359394 2073170 cri.go:89] found id: "f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
I1026 01:38:02.359428 2073170 cri.go:89] found id: "3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
I1026 01:38:02.359433 2073170 cri.go:89] found id: ""
I1026 01:38:02.359441 2073170 logs.go:282] 2 containers: [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad]
I1026 01:38:02.359696 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.368425 2073170 ssh_runner.go:195] Run: which crictl
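The discovery pass above is minikube's log-gathering loop: for each component it asks the CRI for every matching container ID (the pairs reflect containers from before and after the restart) and then tails each one with crictl logs --tail 400. A by-hand equivalent for a single component, a sketch assuming crictl is on PATH inside the node and using kube-apiserver purely as an example:
    # list all kube-apiserver containers (running and exited), then tail each
    for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
      sudo crictl logs --tail 400 "$id"
    done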
I1026 01:38:02.375386 2073170 logs.go:123] Gathering logs for storage-provisioner [3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad] ...
I1026 01:38:02.375416 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
I1026 01:38:02.483267 2073170 logs.go:123] Gathering logs for dmesg ...
I1026 01:38:02.483431 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1026 01:38:02.539716 2073170 logs.go:123] Gathering logs for kube-apiserver [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9] ...
I1026 01:38:02.539755 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
I1026 01:38:02.733373 2073170 logs.go:123] Gathering logs for kube-proxy [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52] ...
I1026 01:38:02.733425 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
I1026 01:38:02.854359 2073170 logs.go:123] Gathering logs for kube-scheduler [4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7] ...
I1026 01:38:02.854394 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
I1026 01:38:02.955435 2073170 logs.go:123] Gathering logs for kube-proxy [79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670] ...
I1026 01:38:02.955469 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
I1026 01:38:03.040330 2073170 logs.go:123] Gathering logs for kube-controller-manager [5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526] ...
I1026 01:38:03.040364 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
I1026 01:38:03.184875 2073170 logs.go:123] Gathering logs for container status ...
I1026 01:38:03.184928 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1026 01:38:03.308598 2073170 logs.go:123] Gathering logs for kubelet ...
I1026 01:38:03.308637 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1026 01:38:03.395084 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142066 658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-44wvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-44wvw" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.395487 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142157 658 reflector.go:138] object-"kube-system"/"metrics-server-token-7tsjh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7tsjh" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.395746 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142205 658 reflector.go:138] object-"kube-system"/"coredns-token-n94ql": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-n94ql" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.395995 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142249 658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.396249 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142293 658 reflector.go:138] object-"kube-system"/"kube-proxy-token-47vp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-47vp6" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.396495 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142333 658 reflector.go:138] object-"kube-system"/"kindnet-token-qqrpm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-qqrpm" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.396759 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142465 658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.397012 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142549 658 reflector.go:138] object-"default"/"default-token-2jcx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2jcx9" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.405224 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.113479 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:38:03.406935 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.907637 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.410090 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:44 old-k8s-version-368787 kubelet[658]: E1026 01:32:44.742023 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:38:03.412301 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:56 old-k8s-version-368787 kubelet[658]: E1026 01:32:56.075801 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.412690 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.080022 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.412911 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.735904 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.413709 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:01 old-k8s-version-368787 kubelet[658]: E1026 01:33:01.507025 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.416720 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:11 old-k8s-version-368787 kubelet[658]: E1026 01:33:11.743711 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:38:03.417382 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:17 old-k8s-version-368787 kubelet[658]: E1026 01:33:17.172767 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.417786 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:21 old-k8s-version-368787 kubelet[658]: E1026 01:33:21.507449 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.418027 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:24 old-k8s-version-368787 kubelet[658]: E1026 01:33:24.731672 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.418426 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:32 old-k8s-version-368787 kubelet[658]: E1026 01:33:32.731878 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.418683 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:37 old-k8s-version-368787 kubelet[658]: E1026 01:33:37.731969 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.419380 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:46 old-k8s-version-368787 kubelet[658]: E1026 01:33:46.262782 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.419604 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:48 old-k8s-version-368787 kubelet[658]: E1026 01:33:48.732324 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.419986 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:51 old-k8s-version-368787 kubelet[658]: E1026 01:33:51.507083 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.422699 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:03 old-k8s-version-368787 kubelet[658]: E1026 01:34:03.750208 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:38:03.423152 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:06 old-k8s-version-368787 kubelet[658]: E1026 01:34:06.731790 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.423380 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:18 old-k8s-version-368787 kubelet[658]: E1026 01:34:18.732360 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.423781 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:21 old-k8s-version-368787 kubelet[658]: E1026 01:34:21.731670 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.423994 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:33 old-k8s-version-368787 kubelet[658]: E1026 01:34:33.732041 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.424632 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:37 old-k8s-version-368787 kubelet[658]: E1026 01:34:37.414157 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.425085 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:41 old-k8s-version-368787 kubelet[658]: E1026 01:34:41.507110 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.425345 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:48 old-k8s-version-368787 kubelet[658]: E1026 01:34:48.731821 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.425722 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:54 old-k8s-version-368787 kubelet[658]: E1026 01:34:54.731233 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.425954 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:59 old-k8s-version-368787 kubelet[658]: E1026 01:34:59.732434 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.426317 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:06 old-k8s-version-368787 kubelet[658]: E1026 01:35:06.731705 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.426524 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:12 old-k8s-version-368787 kubelet[658]: E1026 01:35:12.731827 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.426905 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:19 old-k8s-version-368787 kubelet[658]: E1026 01:35:19.732195 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.429672 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:25 old-k8s-version-368787 kubelet[658]: E1026 01:35:25.742123 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:38:03.430063 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:33 old-k8s-version-368787 kubelet[658]: E1026 01:35:33.731192 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.430283 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:40 old-k8s-version-368787 kubelet[658]: E1026 01:35:40.736836 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.430667 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:48 old-k8s-version-368787 kubelet[658]: E1026 01:35:48.731218 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.430891 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:53 old-k8s-version-368787 kubelet[658]: E1026 01:35:53.733617 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.431531 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:03 old-k8s-version-368787 kubelet[658]: E1026 01:36:03.662574 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.431751 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:06 old-k8s-version-368787 kubelet[658]: E1026 01:36:06.731650 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.432125 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:11 old-k8s-version-368787 kubelet[658]: E1026 01:36:11.507131 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.432342 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:21 old-k8s-version-368787 kubelet[658]: E1026 01:36:21.731783 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.432691 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:23 old-k8s-version-368787 kubelet[658]: E1026 01:36:23.731690 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.433042 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:34 old-k8s-version-368787 kubelet[658]: E1026 01:36:34.731309 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.433355 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:35 old-k8s-version-368787 kubelet[658]: E1026 01:36:35.736727 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.433731 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: E1026 01:36:48.731231 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.433935 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:49 old-k8s-version-368787 kubelet[658]: E1026 01:36:49.732052 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.434295 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: E1026 01:37:02.731253 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.434516 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:04 old-k8s-version-368787 kubelet[658]: E1026 01:37:04.731836 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.434912 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: E1026 01:37:16.731416 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.435166 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.435545 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.435770 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.436139 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.436351 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.436716 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.436953 2073170 logs.go:138] Found kubelet problem: Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1026 01:38:03.436967 2073170 logs.go:123] Gathering logs for etcd [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace] ...
I1026 01:38:03.436992 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
I1026 01:38:03.527806 2073170 logs.go:123] Gathering logs for etcd [19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272] ...
I1026 01:38:03.527843 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
I1026 01:38:03.598581 2073170 logs.go:123] Gathering logs for kindnet [720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b] ...
I1026 01:38:03.598756 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
I1026 01:38:03.677581 2073170 logs.go:123] Gathering logs for containerd ...
I1026 01:38:03.677658 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1026 01:38:03.753106 2073170 logs.go:123] Gathering logs for describe nodes ...
I1026 01:38:03.753195 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1026 01:38:03.997226 2073170 logs.go:123] Gathering logs for coredns [3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e] ...
I1026 01:38:03.997300 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
I1026 01:38:04.087455 2073170 logs.go:123] Gathering logs for kube-scheduler [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044] ...
I1026 01:38:04.087550 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
I1026 01:38:04.175664 2073170 logs.go:123] Gathering logs for kindnet [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a] ...
I1026 01:38:04.175745 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
I1026 01:38:04.270341 2073170 logs.go:123] Gathering logs for kubernetes-dashboard [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125] ...
I1026 01:38:04.270371 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
I1026 01:38:04.370143 2073170 logs.go:123] Gathering logs for storage-provisioner [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb] ...
I1026 01:38:04.370175 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
I1026 01:38:04.447078 2073170 logs.go:123] Gathering logs for kube-apiserver [ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d] ...
I1026 01:38:04.447109 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
I1026 01:38:04.545939 2073170 logs.go:123] Gathering logs for coredns [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7] ...
I1026 01:38:04.545976 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
I1026 01:38:04.715996 2073170 logs.go:123] Gathering logs for kube-controller-manager [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab] ...
I1026 01:38:04.716021 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
I1026 01:38:04.880261 2073170 out.go:358] Setting ErrFile to fd 2...
I1026 01:38:04.880333 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1026 01:38:04.880402 2073170 out.go:270] X Problems detected in kubelet:
W1026 01:38:04.880449 2073170 out.go:270] Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:04.880486 2073170 out.go:270] Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:04.880529 2073170 out.go:270] Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:04.880562 2073170 out.go:270] Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:04.880596 2073170 out.go:270] Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1026 01:38:04.880641 2073170 out.go:358] Setting ErrFile to fd 2...
I1026 01:38:04.880663 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 01:38:14.881298 2073170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1026 01:38:14.898252 2073170 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1026 01:38:14.902356 2073170 out.go:201]
W1026 01:38:14.905153 2073170 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1026 01:38:14.905189 2073170 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1026 01:38:14.905207 2073170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1026 01:38:14.905214 2073170 out.go:270] *
W1026 01:38:14.906019 2073170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1026 01:38:14.907947 2073170 out.go:201]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-368787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-368787
helpers_test.go:235: (dbg) docker inspect old-k8s-version-368787:
-- stdout --
[
{
"Id": "7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a",
"Created": "2024-10-26T01:29:00.83665828Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2073366,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-10-26T01:32:04.112953704Z",
"FinishedAt": "2024-10-26T01:32:02.727115138Z"
},
"Image": "sha256:e536a13478ac3e12b0286f2242f0931e32c32cc3eeb0139a219c9133dcd9fe90",
"ResolvConfPath": "/var/lib/docker/containers/7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a/hostname",
"HostsPath": "/var/lib/docker/containers/7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a/hosts",
"LogPath": "/var/lib/docker/containers/7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a/7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a-json.log",
"Name": "/old-k8s-version-368787",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-368787:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-368787",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/729fddbdd51de18b1d80fbfbb0e03fea5a6c4b3b58ef10c9fc1272371176757a-init/diff:/var/lib/docker/overlay2/438660a3bbbc35bff890f07029ce43b51006aa7672592e2474721b86d466905b/diff",
"MergedDir": "/var/lib/docker/overlay2/729fddbdd51de18b1d80fbfbb0e03fea5a6c4b3b58ef10c9fc1272371176757a/merged",
"UpperDir": "/var/lib/docker/overlay2/729fddbdd51de18b1d80fbfbb0e03fea5a6c4b3b58ef10c9fc1272371176757a/diff",
"WorkDir": "/var/lib/docker/overlay2/729fddbdd51de18b1d80fbfbb0e03fea5a6c4b3b58ef10c9fc1272371176757a/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "old-k8s-version-368787",
"Source": "/var/lib/docker/volumes/old-k8s-version-368787/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "old-k8s-version-368787",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-368787",
"name.minikube.sigs.k8s.io": "old-k8s-version-368787",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "28b72d80bcb33e3bd32ecc0ef53a2eea2452efad336a6f8f183b5299baafc8df",
"SandboxKey": "/var/run/docker/netns/28b72d80bcb3",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35304"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35305"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35308"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35306"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35307"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-368787": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "394804f4b2b3ec80d8f10c173dead534a044bceba117e946f47c8188d66bbc41",
"EndpointID": "74e1e1a0c1fa2a6b4c7f84eea92f4b45a30ad475d47d0cfc7bb454892fa0c2d2",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-368787",
"7dcafe5f5b3f"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-368787 -n old-k8s-version-368787
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-368787 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-368787 logs -n 25: (2.805980027s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-335477 | cert-expiration-335477 | jenkins | v1.34.0 | 26 Oct 24 01:27 UTC | 26 Oct 24 01:28 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-968413 | force-systemd-env-968413 | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-968413 | force-systemd-env-968413 | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
| start | -p cert-options-712326 | cert-options-712326 | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-712326 ssh | cert-options-712326 | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-712326 -- sudo | cert-options-712326 | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-712326 | cert-options-712326 | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
| start | -p old-k8s-version-368787 | old-k8s-version-368787 | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:31 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-335477 | cert-expiration-335477 | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC | 26 Oct 24 01:31 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-335477 | cert-expiration-335477 | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC | 26 Oct 24 01:31 UTC |
| start | -p | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC | 26 Oct 24 01:32 UTC |
| | default-k8s-diff-port-314480 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| addons | enable metrics-server -p old-k8s-version-368787 | old-k8s-version-368787 | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC | 26 Oct 24 01:31 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-368787 | old-k8s-version-368787 | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC | 26 Oct 24 01:32 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-368787 | old-k8s-version-368787 | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | 26 Oct 24 01:32 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-368787 | old-k8s-version-368787 | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p default-k8s-diff-port-314480 | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | 26 Oct 24 01:32 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | 26 Oct 24 01:32 UTC |
| | default-k8s-diff-port-314480 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-314480 | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | 26 Oct 24 01:32 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | 26 Oct 24 01:37 UTC |
| | default-k8s-diff-port-314480 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| image | default-k8s-diff-port-314480 | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | 26 Oct 24 01:37 UTC |
| | image list --format=json | | | | | |
| pause | -p | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | 26 Oct 24 01:37 UTC |
| | default-k8s-diff-port-314480 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | 26 Oct 24 01:37 UTC |
| | default-k8s-diff-port-314480 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | 26 Oct 24 01:37 UTC |
| | default-k8s-diff-port-314480 | | | | | |
| delete | -p | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | 26 Oct 24 01:37 UTC |
| | default-k8s-diff-port-314480 | | | | | |
| start | -p embed-certs-892584 | embed-certs-892584 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/10/26 01:37:39
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.23.2 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1026 01:37:39.366044 2083289 out.go:345] Setting OutFile to fd 1 ...
I1026 01:37:39.366275 2083289 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 01:37:39.366303 2083289 out.go:358] Setting ErrFile to fd 2...
I1026 01:37:39.366328 2083289 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 01:37:39.366603 2083289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
I1026 01:37:39.367097 2083289 out.go:352] Setting JSON to false
I1026 01:37:39.368165 2083289 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":33610,"bootTime":1729873050,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1026 01:37:39.368345 2083289 start.go:139] virtualization:
I1026 01:37:39.371050 2083289 out.go:177] * [embed-certs-892584] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1026 01:37:39.373115 2083289 out.go:177] - MINIKUBE_LOCATION=19868
I1026 01:37:39.373203 2083289 notify.go:220] Checking for updates...
I1026 01:37:39.377234 2083289 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1026 01:37:39.379109 2083289 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
I1026 01:37:39.380969 2083289 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
I1026 01:37:39.382869 2083289 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1026 01:37:39.385168 2083289 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1026 01:37:39.387487 2083289 config.go:182] Loaded profile config "old-k8s-version-368787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1026 01:37:39.387597 2083289 driver.go:394] Setting default libvirt URI to qemu:///system
I1026 01:37:39.412884 2083289 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1026 01:37:39.413032 2083289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1026 01:37:39.484598 2083289 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 01:37:39.473684666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1026 01:37:39.484720 2083289 docker.go:318] overlay module found
I1026 01:37:39.486825 2083289 out.go:177] * Using the docker driver based on user configuration
I1026 01:37:39.488729 2083289 start.go:297] selected driver: docker
I1026 01:37:39.488748 2083289 start.go:901] validating driver "docker" against <nil>
I1026 01:37:39.488763 2083289 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1026 01:37:39.489516 2083289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1026 01:37:39.541410 2083289 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 01:37:39.531208758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1026 01:37:39.541621 2083289 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I1026 01:37:39.541862 2083289 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1026 01:37:39.544391 2083289 out.go:177] * Using Docker driver with root privileges
I1026 01:37:39.546618 2083289 cni.go:84] Creating CNI manager for ""
I1026 01:37:39.546686 2083289 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1026 01:37:39.546704 2083289 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I1026 01:37:39.546789 2083289 start.go:340] cluster config:
{Name:embed-certs-892584 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-892584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1026 01:37:39.549015 2083289 out.go:177] * Starting "embed-certs-892584" primary control-plane node in "embed-certs-892584" cluster
I1026 01:37:39.550806 2083289 cache.go:121] Beginning downloading kic base image for docker with containerd
I1026 01:37:39.552880 2083289 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
I1026 01:37:39.555068 2083289 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1026 01:37:39.555128 2083289 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
I1026 01:37:39.555164 2083289 cache.go:56] Caching tarball of preloaded images
I1026 01:37:39.555158 2083289 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
I1026 01:37:39.555250 2083289 preload.go:172] Found /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1026 01:37:39.555260 2083289 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
I1026 01:37:39.555478 2083289 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/config.json ...
I1026 01:37:39.555568 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/config.json: {Name:mk779949728dad0ca65fc40f5c31f9b716a262de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:37:39.574544 2083289 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
I1026 01:37:39.574570 2083289 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
I1026 01:37:39.574585 2083289 cache.go:194] Successfully downloaded all kic artifacts
I1026 01:37:39.574608 2083289 start.go:360] acquireMachinesLock for embed-certs-892584: {Name:mk4b48d59e38b37e589663d987ea35cd2a3247dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1026 01:37:39.574714 2083289 start.go:364] duration metric: took 86.828µs to acquireMachinesLock for "embed-certs-892584"
I1026 01:37:39.574758 2083289 start.go:93] Provisioning new machine with config: &{Name:embed-certs-892584 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-892584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1026 01:37:39.574837 2083289 start.go:125] createHost starting for "" (driver="docker")
I1026 01:37:38.973325 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:40.978365 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:43.474016 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:39.579441 2083289 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I1026 01:37:39.579702 2083289 start.go:159] libmachine.API.Create for "embed-certs-892584" (driver="docker")
I1026 01:37:39.579747 2083289 client.go:168] LocalClient.Create starting
I1026 01:37:39.579824 2083289 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem
I1026 01:37:39.579866 2083289 main.go:141] libmachine: Decoding PEM data...
I1026 01:37:39.579885 2083289 main.go:141] libmachine: Parsing certificate...
I1026 01:37:39.579948 2083289 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem
I1026 01:37:39.579971 2083289 main.go:141] libmachine: Decoding PEM data...
I1026 01:37:39.579990 2083289 main.go:141] libmachine: Parsing certificate...
I1026 01:37:39.580375 2083289 cli_runner.go:164] Run: docker network inspect embed-certs-892584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1026 01:37:39.596561 2083289 cli_runner.go:211] docker network inspect embed-certs-892584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1026 01:37:39.596649 2083289 network_create.go:284] running [docker network inspect embed-certs-892584] to gather additional debugging logs...
I1026 01:37:39.596670 2083289 cli_runner.go:164] Run: docker network inspect embed-certs-892584
W1026 01:37:39.614879 2083289 cli_runner.go:211] docker network inspect embed-certs-892584 returned with exit code 1
I1026 01:37:39.614911 2083289 network_create.go:287] error running [docker network inspect embed-certs-892584]: docker network inspect embed-certs-892584: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-892584 not found
I1026 01:37:39.614932 2083289 network_create.go:289] output of [docker network inspect embed-certs-892584]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-892584 not found
** /stderr **
I1026 01:37:39.615044 2083289 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1026 01:37:39.633774 2083289 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b80904004ad6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8f:a1:c9:9e} reservation:<nil>}
I1026 01:37:39.634431 2083289 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2dec2bba0dc7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:57:02:36:e1} reservation:<nil>}
I1026 01:37:39.635160 2083289 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b1c506f42330 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:70:55:89:c3} reservation:<nil>}
I1026 01:37:39.635751 2083289 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-394804f4b2b3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:c0:39:57:49} reservation:<nil>}
I1026 01:37:39.636452 2083289 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a10830}
I1026 01:37:39.636486 2083289 network_create.go:124] attempt to create docker network embed-certs-892584 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1026 01:37:39.636579 2083289 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-892584 embed-certs-892584
I1026 01:37:39.724688 2083289 network_create.go:108] docker network embed-certs-892584 192.168.85.0/24 created
I1026 01:37:39.724725 2083289 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-892584" container
I1026 01:37:39.724814 2083289 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1026 01:37:39.744469 2083289 cli_runner.go:164] Run: docker volume create embed-certs-892584 --label name.minikube.sigs.k8s.io=embed-certs-892584 --label created_by.minikube.sigs.k8s.io=true
I1026 01:37:39.761304 2083289 oci.go:103] Successfully created a docker volume embed-certs-892584
I1026 01:37:39.761409 2083289 cli_runner.go:164] Run: docker run --rm --name embed-certs-892584-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-892584 --entrypoint /usr/bin/test -v embed-certs-892584:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -d /var/lib
I1026 01:37:40.485625 2083289 oci.go:107] Successfully prepared a docker volume embed-certs-892584
I1026 01:37:40.485683 2083289 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1026 01:37:40.485704 2083289 kic.go:194] Starting extracting preloaded images to volume ...
I1026 01:37:40.485782 2083289 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-892584:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir
I1026 01:37:45.475440 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:47.476030 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
I1026 01:37:44.952039 2083289 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-892584:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir: (4.466213841s)
I1026 01:37:44.952085 2083289 kic.go:203] duration metric: took 4.466377741s to extract preloaded images to volume ...
W1026 01:37:44.952287 2083289 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1026 01:37:44.952439 2083289 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1026 01:37:45.084690 2083289 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-892584 --name embed-certs-892584 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-892584 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-892584 --network embed-certs-892584 --ip 192.168.85.2 --volume embed-certs-892584:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e
I1026 01:37:45.613009 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Running}}
I1026 01:37:45.632128 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Status}}
I1026 01:37:45.655571 2083289 cli_runner.go:164] Run: docker exec embed-certs-892584 stat /var/lib/dpkg/alternatives/iptables
I1026 01:37:45.719287 2083289 oci.go:144] the created container "embed-certs-892584" has a running status.
I1026 01:37:45.719353 2083289 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa...
I1026 01:37:46.461894 2083289 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1026 01:37:46.505189 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Status}}
I1026 01:37:46.533159 2083289 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1026 01:37:46.533178 2083289 kic_runner.go:114] Args: [docker exec --privileged embed-certs-892584 chown docker:docker /home/docker/.ssh/authorized_keys]
I1026 01:37:46.619570 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Status}}
I1026 01:37:46.637355 2083289 machine.go:93] provisionDockerMachine start ...
I1026 01:37:46.637455 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
I1026 01:37:46.655818 2083289 main.go:141] libmachine: Using SSH client type: native
I1026 01:37:46.656127 2083289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 35314 <nil> <nil>}
I1026 01:37:46.656145 2083289 main.go:141] libmachine: About to run SSH command:
hostname
I1026 01:37:46.795475 2083289 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-892584
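The docker container inspect -f call above uses a Go template to recover the ephemeral host port Docker bound to the container's sshd (22/tcp); everything that follows talks to 127.0.0.1:35314. A small sketch of the same lookup, assuming a docker CLI on PATH:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// The same template the log runs: index the port map at "22/tcp" and take
	// the first binding's HostPort. Container name is the one from this log.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "embed-certs-892584").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh reachable on 127.0.0.1:" + strings.TrimSpace(string(out)))
}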
I1026 01:37:46.795503 2083289 ubuntu.go:169] provisioning hostname "embed-certs-892584"
I1026 01:37:46.795592 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
I1026 01:37:46.814914 2083289 main.go:141] libmachine: Using SSH client type: native
I1026 01:37:46.815168 2083289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 35314 <nil> <nil>}
I1026 01:37:46.815188 2083289 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-892584 && echo "embed-certs-892584" | sudo tee /etc/hostname
I1026 01:37:46.985337 2083289 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-892584
I1026 01:37:46.985427 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
I1026 01:37:47.013204 2083289 main.go:141] libmachine: Using SSH client type: native
I1026 01:37:47.013459 2083289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 35314 <nil> <nil>}
I1026 01:37:47.013484 2083289 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-892584' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-892584/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-892584' | sudo tee -a /etc/hosts;
fi
fi
I1026 01:37:47.155471 2083289 main.go:141] libmachine: SSH cmd err, output: <nil>:
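Each "About to run SSH command" block is libmachine dialing that forwarded port as the docker user with the freshly generated machine key and running a one-off command. A minimal sketch of that flow with golang.org/x/crypto/ssh, using the host and port from this run; the key path is a hypothetical placeholder:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("id_rsa") // hypothetical path to the machine key
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable for a loopback-only test container; never for real hosts.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:35314", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("hostname => %s (err=%v)\n", out, err)
}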
I1026 01:37:47.155541 2083289 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19868-1857747/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-1857747/.minikube}
I1026 01:37:47.155591 2083289 ubuntu.go:177] setting up certificates
I1026 01:37:47.155614 2083289 provision.go:84] configureAuth start
I1026 01:37:47.155698 2083289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-892584
I1026 01:37:47.173892 2083289 provision.go:143] copyHostCerts
I1026 01:37:47.173967 2083289 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem, removing ...
I1026 01:37:47.173982 2083289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem
I1026 01:37:47.174061 2083289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem (1078 bytes)
I1026 01:37:47.174424 2083289 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem, removing ...
I1026 01:37:47.174441 2083289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem
I1026 01:37:47.174480 2083289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem (1123 bytes)
I1026 01:37:47.174566 2083289 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem, removing ...
I1026 01:37:47.174572 2083289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem
I1026 01:37:47.174602 2083289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem (1675 bytes)
I1026 01:37:47.174680 2083289 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-892584 san=[127.0.0.1 192.168.85.2 embed-certs-892584 localhost minikube]
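The server cert generated here must carry every name a client might dial, so the SAN list mixes IPs (127.0.0.1, 192.168.85.2) and DNS names (embed-certs-892584, localhost, minikube). A sketch of building such a cert with crypto/x509; it self-signs for brevity, whereas the step above signs with the shared ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-892584"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration in this profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN set from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"embed-certs-892584", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}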
I1026 01:37:47.679481 2083289 provision.go:177] copyRemoteCerts
I1026 01:37:47.679551 2083289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1026 01:37:47.679599 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
I1026 01:37:47.696244 2083289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35314 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa Username:docker}
I1026 01:37:47.793250 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1026 01:37:47.819673 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1026 01:37:47.844892 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1026 01:37:47.869236 2083289 provision.go:87] duration metric: took 713.596584ms to configureAuth
I1026 01:37:47.869262 2083289 ubuntu.go:193] setting minikube options for container-runtime
I1026 01:37:47.869451 2083289 config.go:182] Loaded profile config "embed-certs-892584": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 01:37:47.869459 2083289 machine.go:96] duration metric: took 1.232081114s to provisionDockerMachine
I1026 01:37:47.869465 2083289 client.go:171] duration metric: took 8.289708081s to LocalClient.Create
I1026 01:37:47.869488 2083289 start.go:167] duration metric: took 8.289786899s to libmachine.API.Create "embed-certs-892584"
I1026 01:37:47.869498 2083289 start.go:293] postStartSetup for "embed-certs-892584" (driver="docker")
I1026 01:37:47.869507 2083289 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1026 01:37:47.869562 2083289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1026 01:37:47.869603 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
I1026 01:37:47.887902 2083289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35314 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa Username:docker}
I1026 01:37:47.988607 2083289 ssh_runner.go:195] Run: cat /etc/os-release
I1026 01:37:47.992000 2083289 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1026 01:37:47.992037 2083289 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1026 01:37:47.992048 2083289 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1026 01:37:47.992055 2083289 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1026 01:37:47.992069 2083289 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-1857747/.minikube/addons for local assets ...
I1026 01:37:47.992135 2083289 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-1857747/.minikube/files for local assets ...
I1026 01:37:47.992214 2083289 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem -> 18643732.pem in /etc/ssl/certs
I1026 01:37:47.992323 2083289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1026 01:37:48.002636 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem --> /etc/ssl/certs/18643732.pem (1708 bytes)
I1026 01:37:48.035498 2083289 start.go:296] duration metric: took 165.984021ms for postStartSetup
I1026 01:37:48.035962 2083289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-892584
I1026 01:37:48.060588 2083289 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/config.json ...
I1026 01:37:48.060905 2083289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1026 01:37:48.060958 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
I1026 01:37:48.080090 2083289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35314 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa Username:docker}
I1026 01:37:48.176629 2083289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1026 01:37:48.181852 2083289 start.go:128] duration metric: took 8.606998108s to createHost
I1026 01:37:48.181877 2083289 start.go:83] releasing machines lock for "embed-certs-892584", held for 8.607148674s
I1026 01:37:48.181954 2083289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-892584
I1026 01:37:48.198709 2083289 ssh_runner.go:195] Run: cat /version.json
I1026 01:37:48.198762 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
I1026 01:37:48.198999 2083289 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1026 01:37:48.199072 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
I1026 01:37:48.215393 2083289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35314 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa Username:docker}
I1026 01:37:48.233594 2083289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35314 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa Username:docker}
I1026 01:37:48.303172 2083289 ssh_runner.go:195] Run: systemctl --version
I1026 01:37:48.468633 2083289 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1026 01:37:48.475581 2083289 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1026 01:37:48.501613 2083289 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1026 01:37:48.501697 2083289 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1026 01:37:48.532698 2083289 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I1026 01:37:48.532723 2083289 start.go:495] detecting cgroup driver to use...
I1026 01:37:48.532756 2083289 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1026 01:37:48.532808 2083289 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1026 01:37:48.545587 2083289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1026 01:37:48.557746 2083289 docker.go:217] disabling cri-docker service (if available) ...
I1026 01:37:48.557816 2083289 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1026 01:37:48.571885 2083289 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1026 01:37:48.587883 2083289 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1026 01:37:48.682685 2083289 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1026 01:37:48.776738 2083289 docker.go:233] disabling docker service ...
I1026 01:37:48.776841 2083289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1026 01:37:48.800961 2083289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1026 01:37:48.813232 2083289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1026 01:37:48.899310 2083289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1026 01:37:49.008549 2083289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1026 01:37:49.021756 2083289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1026 01:37:49.044890 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1026 01:37:49.057489 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1026 01:37:49.069891 2083289 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1026 01:37:49.069994 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1026 01:37:49.082119 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1026 01:37:49.094646 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1026 01:37:49.105712 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1026 01:37:49.119169 2083289 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1026 01:37:49.128521 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1026 01:37:49.140876 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1026 01:37:49.154741 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
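The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false to match the cgroupfs driver detected on the host, migrate legacy runtime names to io.containerd.runc.v2, and re-enable unprivileged ports. The indentation-preserving capture group is the essential part; a Go equivalent of one of those edits:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Same shape as the sed above: capture leading whitespace so the TOML
	// nesting survives, then rewrite whatever value was there.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}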
I1026 01:37:49.166815 2083289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1026 01:37:49.177951 2083289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1026 01:37:49.192118 2083289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1026 01:37:49.315011 2083289 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1026 01:37:49.517180 2083289 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1026 01:37:49.517286 2083289 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1026 01:37:49.521791 2083289 start.go:563] Will wait 60s for crictl version
I1026 01:37:49.521891 2083289 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.525986 2083289 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1026 01:37:49.598803 2083289 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1026 01:37:49.598903 2083289 ssh_runner.go:195] Run: containerd --version
I1026 01:37:49.631393 2083289 ssh_runner.go:195] Run: containerd --version
I1026 01:37:49.672197 2083289 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
I1026 01:37:49.674262 2083289 cli_runner.go:164] Run: docker network inspect embed-certs-892584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1026 01:37:49.702595 2083289 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1026 01:37:49.707901 2083289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
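This grep/echo pipeline is how minikube keeps /etc/hosts idempotent: strip any stale host.minikube.internal line, append the current gateway IP, and copy the result back over the file. The same logic in Go as a pure function over the file contents (field-based matching instead of the tab-anchored grep, which is a simplification):

package main

import (
	"fmt"
	"strings"
)

// addHostEntry drops any existing line whose hostname field matches name,
// then appends "ip<TAB>name", mirroring the shell one-liner above.
func addHostEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		f := strings.Fields(line)
		if len(f) >= 2 && f[1] == name {
			continue // stale entry; drop it
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	fmt.Print(addHostEntry("127.0.0.1 localhost\n", "192.168.85.1", "host.minikube.internal"))
}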
I1026 01:37:49.724910 2083289 kubeadm.go:883] updating cluster {Name:embed-certs-892584 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-892584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1026 01:37:49.725074 2083289 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1026 01:37:49.725154 2083289 ssh_runner.go:195] Run: sudo crictl images --output json
I1026 01:37:49.810836 2083289 containerd.go:627] all images are preloaded for containerd runtime.
I1026 01:37:49.810944 2083289 containerd.go:534] Images already preloaded, skipping extraction
I1026 01:37:49.811079 2083289 ssh_runner.go:195] Run: sudo crictl images --output json
I1026 01:37:49.877764 2083289 containerd.go:627] all images are preloaded for containerd runtime.
I1026 01:37:49.877786 2083289 cache_images.go:84] Images are preloaded, skipping loading
I1026 01:37:49.877793 2083289 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.2 containerd true true} ...
I1026 01:37:49.877900 2083289 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-892584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.31.2 ClusterName:embed-certs-892584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1026 01:37:49.877971 2083289 ssh_runner.go:195] Run: sudo crictl info
I1026 01:37:49.938813 2083289 cni.go:84] Creating CNI manager for ""
I1026 01:37:49.938886 2083289 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1026 01:37:49.938921 2083289 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1026 01:37:49.938982 2083289 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-892584 NodeName:embed-certs-892584 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1026 01:37:49.939166 2083289 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-892584"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.31.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
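The rendered kubeadm.yaml above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity-check sketch that walks such a stream and prints each document's kind, assuming gopkg.in/yaml.v3 and a local copy of the file:

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once every "---"-separated document is consumed
		}
		fmt.Printf("%-25s %s\n", doc.Kind, doc.APIVersion)
	}
}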
I1026 01:37:49.939304 2083289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
I1026 01:37:49.960744 2083289 binaries.go:44] Found k8s binaries, skipping transfer
I1026 01:37:49.960881 2083289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1026 01:37:49.974039 2083289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I1026 01:37:49.993345 2083289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1026 01:37:50.018112 2083289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I1026 01:37:50.049463 2083289 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1026 01:37:50.053790 2083289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1026 01:37:50.069678 2083289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1026 01:37:50.201876 2083289 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1026 01:37:50.216932 2083289 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584 for IP: 192.168.85.2
I1026 01:37:50.217009 2083289 certs.go:194] generating shared ca certs ...
I1026 01:37:50.217043 2083289 certs.go:226] acquiring lock for ca certs: {Name:mkcea56562cecb76fcc8b6004959524ff574e9b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:37:50.217272 2083289 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.key
I1026 01:37:50.217353 2083289 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.key
I1026 01:37:50.217388 2083289 certs.go:256] generating profile certs ...
I1026 01:37:50.217488 2083289 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.key
I1026 01:37:50.217522 2083289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.crt with IP's: []
I1026 01:37:50.831745 2083289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.crt ...
I1026 01:37:50.831858 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.crt: {Name:mk231d5785b52be9398c1cd11c69cb093a17dc5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:37:50.832108 2083289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.key ...
I1026 01:37:50.832172 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.key: {Name:mk63457e82094ff3f2b63a9f1b335d0baeaf01a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:37:50.832799 2083289 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key.b5a2e078
I1026 01:37:50.832867 2083289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt.b5a2e078 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I1026 01:37:51.315630 2083289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt.b5a2e078 ...
I1026 01:37:51.315726 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt.b5a2e078: {Name:mkbfc441fec043b099535fe54c9453350d9e1e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:37:51.316427 2083289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key.b5a2e078 ...
I1026 01:37:51.316474 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key.b5a2e078: {Name:mk90ba137d89d0cae34618f463703703a6c235d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:37:51.316630 2083289 certs.go:381] copying /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt.b5a2e078 -> /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt
I1026 01:37:51.316763 2083289 certs.go:385] copying /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key.b5a2e078 -> /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key
I1026 01:37:51.316869 2083289 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.key
I1026 01:37:51.316906 2083289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.crt with IP's: []
I1026 01:37:51.738864 2083289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.crt ...
I1026 01:37:51.738899 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.crt: {Name:mk2dea1ddde2f81e3c925cc3f4e1f3443347385f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:37:51.739545 2083289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.key ...
I1026 01:37:51.739563 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.key: {Name:mka770a543cd600bbccfe52856f0b475fa9e82da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:37:51.739770 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373.pem (1338 bytes)
W1026 01:37:51.739815 2083289 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373_empty.pem, impossibly tiny 0 bytes
I1026 01:37:51.739834 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem (1679 bytes)
I1026 01:37:51.739859 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem (1078 bytes)
I1026 01:37:51.739885 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem (1123 bytes)
I1026 01:37:51.739910 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem (1675 bytes)
I1026 01:37:51.739956 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem (1708 bytes)
I1026 01:37:51.740596 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1026 01:37:51.767431 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1026 01:37:51.794866 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1026 01:37:51.821067 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1026 01:37:51.846652 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I1026 01:37:51.871596 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1026 01:37:51.896887 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1026 01:37:51.932056 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1026 01:37:51.960375 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373.pem --> /usr/share/ca-certificates/1864373.pem (1338 bytes)
I1026 01:37:51.994124 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem --> /usr/share/ca-certificates/18643732.pem (1708 bytes)
I1026 01:37:52.030165 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1026 01:37:52.058438 2083289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1026 01:37:52.077514 2083289 ssh_runner.go:195] Run: openssl version
I1026 01:37:52.086511 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1864373.pem && ln -fs /usr/share/ca-certificates/1864373.pem /etc/ssl/certs/1864373.pem"
I1026 01:37:52.097803 2083289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1864373.pem
I1026 01:37:52.101748 2083289 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:51 /usr/share/ca-certificates/1864373.pem
I1026 01:37:52.101889 2083289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1864373.pem
I1026 01:37:52.115147 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1864373.pem /etc/ssl/certs/51391683.0"
I1026 01:37:52.124952 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18643732.pem && ln -fs /usr/share/ca-certificates/18643732.pem /etc/ssl/certs/18643732.pem"
I1026 01:37:52.134472 2083289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18643732.pem
I1026 01:37:52.138151 2083289 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:51 /usr/share/ca-certificates/18643732.pem
I1026 01:37:52.138257 2083289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18643732.pem
I1026 01:37:52.145360 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18643732.pem /etc/ssl/certs/3ec20f2e.0"
I1026 01:37:52.154996 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1026 01:37:52.164524 2083289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1026 01:37:52.168090 2083289 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
I1026 01:37:52.168178 2083289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1026 01:37:52.175455 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
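The ln -fs dance above exists because OpenSSL finds trusted CAs by subject-name hash: each cert in /etc/ssl/certs must be reachable as <hash>.0 (the numeric suffix disambiguates hash collisions), where the hash is what openssl x509 -hash -noout prints; b5213941.0 is minikubeCA's link above. A sketch that derives the symlink name, assuming openssl on PATH:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Print the subject-name hash OpenSSL uses for CA lookup, then the
	// symlink name the shell command above would create for it.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("ln -fs ...minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}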
I1026 01:37:52.184952 2083289 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1026 01:37:52.188791 2083289 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1026 01:37:52.188843 2083289 kubeadm.go:392] StartCluster: {Name:embed-certs-892584 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-892584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1026 01:37:52.188931 2083289 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1026 01:37:52.188991 2083289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1026 01:37:52.233565 2083289 cri.go:89] found id: ""
I1026 01:37:52.233709 2083289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1026 01:37:52.242963 2083289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1026 01:37:52.253243 2083289 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1026 01:37:52.253315 2083289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1026 01:37:52.262290 2083289 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1026 01:37:52.262363 2083289 kubeadm.go:157] found existing configuration files:
I1026 01:37:52.262436 2083289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1026 01:37:52.271182 2083289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1026 01:37:52.271276 2083289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1026 01:37:52.280164 2083289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1026 01:37:52.289372 2083289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1026 01:37:52.289462 2083289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1026 01:37:52.298266 2083289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1026 01:37:52.308092 2083289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1026 01:37:52.308183 2083289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1026 01:37:52.317501 2083289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1026 01:37:52.326786 2083289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1026 01:37:52.326908 2083289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1026 01:37:52.335564 2083289 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1026 01:37:52.407097 2083289 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
I1026 01:37:52.407264 2083289 kubeadm.go:310] [preflight] Running pre-flight checks
I1026 01:37:52.430200 2083289 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I1026 01:37:52.430309 2083289 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-aws
I1026 01:37:52.430369 2083289 kubeadm.go:310] OS: Linux
I1026 01:37:52.430446 2083289 kubeadm.go:310] CGROUPS_CPU: enabled
I1026 01:37:52.430526 2083289 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I1026 01:37:52.430602 2083289 kubeadm.go:310] CGROUPS_CPUSET: enabled
I1026 01:37:52.430674 2083289 kubeadm.go:310] CGROUPS_DEVICES: enabled
I1026 01:37:52.430749 2083289 kubeadm.go:310] CGROUPS_FREEZER: enabled
I1026 01:37:52.430826 2083289 kubeadm.go:310] CGROUPS_MEMORY: enabled
I1026 01:37:52.430900 2083289 kubeadm.go:310] CGROUPS_PIDS: enabled
I1026 01:37:52.431017 2083289 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I1026 01:37:52.431092 2083289 kubeadm.go:310] CGROUPS_BLKIO: enabled
I1026 01:37:52.502421 2083289 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I1026 01:37:52.502538 2083289 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1026 01:37:52.502635 2083289 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1026 01:37:52.511740 2083289 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1026 01:37:48.981712 2073170 pod_ready.go:82] duration metric: took 4m0.014058258s for pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace to be "Ready" ...
E1026 01:37:48.981744 2073170 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1026 01:37:48.981801 2073170 pod_ready.go:39] duration metric: took 5m20.850945581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
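The 4m0s duration above is the per-pod Ready deadline expiring for metrics-server, which is what tips this run into failure. A rough stand-in for that polling loop, assuming kubectl on PATH with its context pointed at the cluster; the pod name is the one from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the pod's Ready condition until it reports True or the deadline passes.
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "-n", "kube-system", "get", "pod",
			"metrics-server-9975d5f86-v2pwf", "-o", jsonpath).Output()
		if strings.TrimSpace(string(out)) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for Ready") // the path this test took
}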
I1026 01:37:48.981824 2073170 api_server.go:52] waiting for apiserver process to appear ...
I1026 01:37:48.981925 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1026 01:37:48.982046 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1026 01:37:49.061661 2073170 cri.go:89] found id: "caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
I1026 01:37:49.061738 2073170 cri.go:89] found id: "ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
I1026 01:37:49.061758 2073170 cri.go:89] found id: ""
I1026 01:37:49.061783 2073170 logs.go:282] 2 containers: [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d]
I1026 01:37:49.061874 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.066064 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.070465 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1026 01:37:49.070527 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1026 01:37:49.152162 2073170 cri.go:89] found id: "3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
I1026 01:37:49.152183 2073170 cri.go:89] found id: "19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
I1026 01:37:49.152189 2073170 cri.go:89] found id: ""
I1026 01:37:49.152196 2073170 logs.go:282] 2 containers: [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272]
I1026 01:37:49.152250 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.157843 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.161728 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1026 01:37:49.161874 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1026 01:37:49.213678 2073170 cri.go:89] found id: "c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
I1026 01:37:49.213756 2073170 cri.go:89] found id: "3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
I1026 01:37:49.213776 2073170 cri.go:89] found id: ""
I1026 01:37:49.213800 2073170 logs.go:282] 2 containers: [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e]
I1026 01:37:49.213885 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.220177 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.232203 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1026 01:37:49.232345 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1026 01:37:49.294557 2073170 cri.go:89] found id: "9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
I1026 01:37:49.294645 2073170 cri.go:89] found id: "4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
I1026 01:37:49.294665 2073170 cri.go:89] found id: ""
I1026 01:37:49.294689 2073170 logs.go:282] 2 containers: [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7]
I1026 01:37:49.294782 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.299146 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.303215 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1026 01:37:49.303357 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1026 01:37:49.350569 2073170 cri.go:89] found id: "f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
I1026 01:37:49.350646 2073170 cri.go:89] found id: "79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
I1026 01:37:49.350668 2073170 cri.go:89] found id: ""
I1026 01:37:49.350691 2073170 logs.go:282] 2 containers: [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670]
I1026 01:37:49.350780 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.356495 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.360987 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1026 01:37:49.361095 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1026 01:37:49.416682 2073170 cri.go:89] found id: "407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
I1026 01:37:49.416758 2073170 cri.go:89] found id: "5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
I1026 01:37:49.416778 2073170 cri.go:89] found id: ""
I1026 01:37:49.416800 2073170 logs.go:282] 2 containers: [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526]
I1026 01:37:49.416889 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.421667 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.425830 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1026 01:37:49.425971 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1026 01:37:49.476562 2073170 cri.go:89] found id: "19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
I1026 01:37:49.476639 2073170 cri.go:89] found id: "720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
I1026 01:37:49.476670 2073170 cri.go:89] found id: ""
I1026 01:37:49.476691 2073170 logs.go:282] 2 containers: [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b]
I1026 01:37:49.476777 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.481392 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.485639 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1026 01:37:49.485779 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1026 01:37:49.536284 2073170 cri.go:89] found id: "f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
I1026 01:37:49.536306 2073170 cri.go:89] found id: "3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
I1026 01:37:49.536312 2073170 cri.go:89] found id: ""
I1026 01:37:49.536320 2073170 logs.go:282] 2 containers: [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad]
I1026 01:37:49.536379 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.540772 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.545367 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1026 01:37:49.545440 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1026 01:37:49.595865 2073170 cri.go:89] found id: "ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
I1026 01:37:49.595886 2073170 cri.go:89] found id: ""
I1026 01:37:49.595894 2073170 logs.go:282] 1 containers: [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125]
I1026 01:37:49.595953 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:37:49.606230 2073170 logs.go:123] Gathering logs for coredns [3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e] ...
I1026 01:37:49.606256 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
I1026 01:37:49.660000 2073170 logs.go:123] Gathering logs for kube-scheduler [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044] ...
I1026 01:37:49.660082 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
I1026 01:37:49.717276 2073170 logs.go:123] Gathering logs for kube-controller-manager [5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526] ...
I1026 01:37:49.717309 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
I1026 01:37:49.815045 2073170 logs.go:123] Gathering logs for kube-apiserver [ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d] ...
I1026 01:37:49.815084 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
I1026 01:37:49.932109 2073170 logs.go:123] Gathering logs for kube-proxy [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52] ...
I1026 01:37:49.932149 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
I1026 01:37:50.002376 2073170 logs.go:123] Gathering logs for kube-proxy [79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670] ...
I1026 01:37:50.002417 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
I1026 01:37:50.059980 2073170 logs.go:123] Gathering logs for kindnet [720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b] ...
I1026 01:37:50.060057 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
I1026 01:37:50.142243 2073170 logs.go:123] Gathering logs for container status ...
I1026 01:37:50.142278 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
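(The "container status" step above uses a fallback chain taken verbatim from the log: it prefers crictl, substitutes the literal word crictl when `which` finds nothing so the later failure is visible, and only then retries with docker:
    # Prefer crictl; if it is absent, the `echo crictl` placeholder makes the
    # sudo invocation fail loudly, and the || branch falls back to docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
On this containerd-based node the crictl branch is the one that succeeds.)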
I1026 01:37:50.273887 2073170 logs.go:123] Gathering logs for kubelet ...
I1026 01:37:50.273926 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1026 01:37:50.400116 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142066 658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-44wvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-44wvw" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.400368 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142157 658 reflector.go:138] object-"kube-system"/"metrics-server-token-7tsjh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7tsjh" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.400590 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142205 658 reflector.go:138] object-"kube-system"/"coredns-token-n94ql": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-n94ql" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.400798 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142249 658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.401019 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142293 658 reflector.go:138] object-"kube-system"/"kube-proxy-token-47vp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-47vp6" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.401237 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142333 658 reflector.go:138] object-"kube-system"/"kindnet-token-qqrpm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-qqrpm" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.401445 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142465 658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.401657 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142549 658 reflector.go:138] object-"default"/"default-token-2jcx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2jcx9" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:37:50.409687 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.113479 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:37:50.411310 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.907637 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.414173 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:44 old-k8s-version-368787 kubelet[658]: E1026 01:32:44.742023 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:37:50.416401 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:56 old-k8s-version-368787 kubelet[658]: E1026 01:32:56.075801 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.416745 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.080022 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.416936 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.735904 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.417608 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:01 old-k8s-version-368787 kubelet[658]: E1026 01:33:01.507025 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.420461 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:11 old-k8s-version-368787 kubelet[658]: E1026 01:33:11.743711 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:37:50.421064 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:17 old-k8s-version-368787 kubelet[658]: E1026 01:33:17.172767 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.421398 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:21 old-k8s-version-368787 kubelet[658]: E1026 01:33:21.507449 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.421588 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:24 old-k8s-version-368787 kubelet[658]: E1026 01:33:24.731672 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.421924 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:32 old-k8s-version-368787 kubelet[658]: E1026 01:33:32.731878 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.422114 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:37 old-k8s-version-368787 kubelet[658]: E1026 01:33:37.731969 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.422719 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:46 old-k8s-version-368787 kubelet[658]: E1026 01:33:46.262782 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.422908 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:48 old-k8s-version-368787 kubelet[658]: E1026 01:33:48.732324 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.423246 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:51 old-k8s-version-368787 kubelet[658]: E1026 01:33:51.507083 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.425832 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:03 old-k8s-version-368787 kubelet[658]: E1026 01:34:03.750208 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:37:50.426182 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:06 old-k8s-version-368787 kubelet[658]: E1026 01:34:06.731790 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.426374 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:18 old-k8s-version-368787 kubelet[658]: E1026 01:34:18.732360 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.426713 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:21 old-k8s-version-368787 kubelet[658]: E1026 01:34:21.731670 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.426907 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:33 old-k8s-version-368787 kubelet[658]: E1026 01:34:33.732041 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.427535 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:37 old-k8s-version-368787 kubelet[658]: E1026 01:34:37.414157 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.427870 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:41 old-k8s-version-368787 kubelet[658]: E1026 01:34:41.507110 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.428130 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:48 old-k8s-version-368787 kubelet[658]: E1026 01:34:48.731821 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.428468 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:54 old-k8s-version-368787 kubelet[658]: E1026 01:34:54.731233 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.428662 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:59 old-k8s-version-368787 kubelet[658]: E1026 01:34:59.732434 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.428993 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:06 old-k8s-version-368787 kubelet[658]: E1026 01:35:06.731705 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.429180 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:12 old-k8s-version-368787 kubelet[658]: E1026 01:35:12.731827 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.429561 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:19 old-k8s-version-368787 kubelet[658]: E1026 01:35:19.732195 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.432106 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:25 old-k8s-version-368787 kubelet[658]: E1026 01:35:25.742123 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:37:50.432445 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:33 old-k8s-version-368787 kubelet[658]: E1026 01:35:33.731192 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.432634 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:40 old-k8s-version-368787 kubelet[658]: E1026 01:35:40.736836 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.432982 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:48 old-k8s-version-368787 kubelet[658]: E1026 01:35:48.731218 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.433171 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:53 old-k8s-version-368787 kubelet[658]: E1026 01:35:53.733617 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.433771 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:03 old-k8s-version-368787 kubelet[658]: E1026 01:36:03.662574 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.433959 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:06 old-k8s-version-368787 kubelet[658]: E1026 01:36:06.731650 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.434293 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:11 old-k8s-version-368787 kubelet[658]: E1026 01:36:11.507131 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.434525 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:21 old-k8s-version-368787 kubelet[658]: E1026 01:36:21.731783 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.434861 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:23 old-k8s-version-368787 kubelet[658]: E1026 01:36:23.731690 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.435204 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:34 old-k8s-version-368787 kubelet[658]: E1026 01:36:34.731309 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.435398 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:35 old-k8s-version-368787 kubelet[658]: E1026 01:36:35.736727 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.435735 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: E1026 01:36:48.731231 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.435924 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:49 old-k8s-version-368787 kubelet[658]: E1026 01:36:49.732052 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.436258 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: E1026 01:37:02.731253 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.436447 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:04 old-k8s-version-368787 kubelet[658]: E1026 01:37:04.731836 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.436780 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: E1026 01:37:16.731416 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.436968 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.437304 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.437492 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:50.437824 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:50.438014 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
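(Each "Found kubelet problem" warning above is a line that minikube's scanner flagged from the kubelet journal slice gathered two lines earlier. The raw slice can be pulled manually; a sketch, assuming journalctl is available on the node, where the grep only approximates the scanner by keeping klog error-level entries:
    # Last 400 kubelet journal lines, as in the log above; keep E-level entries.
    sudo journalctl -u kubelet -n 400 | grep -E ' E[0-9]{4} '
The recurring entries here are the metrics-server ImagePullBackOff against the unresolvable fake.domain registry and the dashboard-metrics-scraper CrashLoopBackOff.)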
I1026 01:37:50.438025 2073170 logs.go:123] Gathering logs for dmesg ...
I1026 01:37:50.438040 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1026 01:37:50.454757 2073170 logs.go:123] Gathering logs for describe nodes ...
I1026 01:37:50.454785 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
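(The "describe nodes" step invokes the kubectl binary minikube staged for the cluster's Kubernetes version, pointed at the in-VM kubeconfig; run by hand it is the same command the log records:
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
)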
I1026 01:37:50.669583 2073170 logs.go:123] Gathering logs for containerd ...
I1026 01:37:50.669862 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1026 01:37:50.736640 2073170 logs.go:123] Gathering logs for coredns [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7] ...
I1026 01:37:50.736718 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
I1026 01:37:50.791237 2073170 logs.go:123] Gathering logs for kindnet [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a] ...
I1026 01:37:50.791266 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
I1026 01:37:50.860038 2073170 logs.go:123] Gathering logs for storage-provisioner [3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad] ...
I1026 01:37:50.860076 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
I1026 01:37:50.936359 2073170 logs.go:123] Gathering logs for kube-scheduler [4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7] ...
I1026 01:37:50.936407 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
I1026 01:37:51.078999 2073170 logs.go:123] Gathering logs for kube-controller-manager [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab] ...
I1026 01:37:51.079039 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
I1026 01:37:51.197002 2073170 logs.go:123] Gathering logs for storage-provisioner [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb] ...
I1026 01:37:51.197040 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
I1026 01:37:51.270252 2073170 logs.go:123] Gathering logs for kubernetes-dashboard [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125] ...
I1026 01:37:51.270281 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
I1026 01:37:51.351708 2073170 logs.go:123] Gathering logs for kube-apiserver [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9] ...
I1026 01:37:51.351739 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
I1026 01:37:51.428214 2073170 logs.go:123] Gathering logs for etcd [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace] ...
I1026 01:37:51.428289 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
I1026 01:37:51.480860 2073170 logs.go:123] Gathering logs for etcd [19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272] ...
I1026 01:37:51.480949 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
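(Every per-component "Gathering logs for ..." pair above follows one pattern: a container ID discovered earlier is fed to crictl logs with a 400-line tail. A generic sketch, where CONTAINER_ID is a placeholder for any of the IDs listed above:
    # Tail the last 400 log lines of one CRI container; the ID comes from the
    # earlier `crictl ps -a --quiet --name=<component>` lookup.
    sudo /usr/bin/crictl logs --tail 400 "$CONTAINER_ID"
Two IDs per component appear because both the current and the previous (pre-restart) containers are collected.)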
I1026 01:37:51.533094 2073170 out.go:358] Setting ErrFile to fd 2...
I1026 01:37:51.533165 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1026 01:37:51.533239 2073170 out.go:270] X Problems detected in kubelet:
W1026 01:37:51.533278 2073170 out.go:270] Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:51.533314 2073170 out.go:270] Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:51.533366 2073170 out.go:270] Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:37:51.533403 2073170 out.go:270] Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:37:51.533452 2073170 out.go:270] Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1026 01:37:51.533488 2073170 out.go:358] Setting ErrFile to fd 2...
I1026 01:37:51.533508 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 01:37:52.514893 2083289 out.go:235] - Generating certificates and keys ...
I1026 01:37:52.515003 2083289 kubeadm.go:310] [certs] Using existing ca certificate authority
I1026 01:37:52.515101 2083289 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I1026 01:37:53.126048 2083289 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I1026 01:37:53.592328 2083289 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I1026 01:37:53.973998 2083289 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I1026 01:37:54.863814 2083289 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I1026 01:37:55.080365 2083289 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I1026 01:37:55.080975 2083289 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-892584 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1026 01:37:55.444224 2083289 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I1026 01:37:55.444527 2083289 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-892584 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1026 01:37:55.877422 2083289 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I1026 01:37:56.552980 2083289 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I1026 01:37:57.272878 2083289 kubeadm.go:310] [certs] Generating "sa" key and public key
I1026 01:37:57.273190 2083289 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1026 01:37:57.941688 2083289 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I1026 01:37:58.397786 2083289 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1026 01:37:58.657250 2083289 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1026 01:37:59.135480 2083289 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1026 01:37:59.572599 2083289 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1026 01:37:59.573142 2083289 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1026 01:37:59.576141 2083289 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
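(The interleaved 2083289 entries belong to a second, concurrent test process bootstrapping the embed-certs-892584 profile through kubeadm's [certs] and [kubeconfig] phases. Once kubeadm reaches the [control-plane] phase, its output can be inspected on that node; a sketch using only the folder names the log reports:
    # kubeadm writes one static pod manifest per control-plane component here.
    sudo ls /etc/kubernetes/manifests
    # The kubeconfig files written in the [kubeconfig] phase sit alongside them.
    sudo ls /etc/kubernetes/*.conf
)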
I1026 01:38:01.535291 2073170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1026 01:38:01.547514 2073170 api_server.go:72] duration metric: took 5m49.774798849s to wait for apiserver process to appear ...
I1026 01:38:01.547541 2073170 api_server.go:88] waiting for apiserver healthz status ...
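(Having confirmed via pgrep that a kube-apiserver process exists, minikube now polls the apiserver's healthz status. A manual equivalent, assuming the minikube profile name doubles as the kubectl context, as minikube configures by default; the --raw form avoids guessing the apiserver's host port:
    # Prints "ok" once the apiserver reports healthy.
    kubectl --context old-k8s-version-368787 get --raw /healthz
)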
I1026 01:38:01.547576 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1026 01:38:01.547632 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1026 01:38:01.587732 2073170 cri.go:89] found id: "caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
I1026 01:38:01.587754 2073170 cri.go:89] found id: "ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
I1026 01:38:01.587759 2073170 cri.go:89] found id: ""
I1026 01:38:01.587766 2073170 logs.go:282] 2 containers: [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d]
I1026 01:38:01.587828 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.592229 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.595984 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1026 01:38:01.596068 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1026 01:38:01.639841 2073170 cri.go:89] found id: "3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
I1026 01:38:01.639871 2073170 cri.go:89] found id: "19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
I1026 01:38:01.639876 2073170 cri.go:89] found id: ""
I1026 01:38:01.639884 2073170 logs.go:282] 2 containers: [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272]
I1026 01:38:01.639994 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.644607 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.648285 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1026 01:38:01.648362 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1026 01:38:01.720748 2073170 cri.go:89] found id: "c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
I1026 01:38:01.720774 2073170 cri.go:89] found id: "3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
I1026 01:38:01.720780 2073170 cri.go:89] found id: ""
I1026 01:38:01.720787 2073170 logs.go:282] 2 containers: [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e]
I1026 01:38:01.720846 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.726066 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.732857 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1026 01:38:01.732992 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1026 01:38:01.814967 2073170 cri.go:89] found id: "9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
I1026 01:38:01.814997 2073170 cri.go:89] found id: "4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
I1026 01:38:01.815005 2073170 cri.go:89] found id: ""
I1026 01:38:01.815012 2073170 logs.go:282] 2 containers: [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7]
I1026 01:38:01.815203 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.819665 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.826464 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1026 01:38:01.826610 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1026 01:38:01.897678 2073170 cri.go:89] found id: "f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
I1026 01:38:01.897708 2073170 cri.go:89] found id: "79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
I1026 01:38:01.897714 2073170 cri.go:89] found id: ""
I1026 01:38:01.897727 2073170 logs.go:282] 2 containers: [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670]
I1026 01:38:01.897878 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.922934 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:01.928999 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1026 01:38:01.929123 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1026 01:38:02.046457 2073170 cri.go:89] found id: "407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
I1026 01:38:02.046487 2073170 cri.go:89] found id: "5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
I1026 01:38:02.046498 2073170 cri.go:89] found id: ""
I1026 01:38:02.046512 2073170 logs.go:282] 2 containers: [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526]
I1026 01:38:02.046624 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.067786 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.076203 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1026 01:38:02.076352 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1026 01:38:02.150567 2073170 cri.go:89] found id: "19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
I1026 01:38:02.150612 2073170 cri.go:89] found id: "720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
I1026 01:38:02.150617 2073170 cri.go:89] found id: ""
I1026 01:38:02.150673 2073170 logs.go:282] 2 containers: [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b]
I1026 01:38:02.150774 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.156731 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.163096 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1026 01:38:02.163254 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1026 01:38:02.248045 2073170 cri.go:89] found id: "ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
I1026 01:38:02.248072 2073170 cri.go:89] found id: ""
I1026 01:38:02.248081 2073170 logs.go:282] 1 container: [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125]
I1026 01:38:02.248231 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.258094 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1026 01:38:02.258253 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1026 01:38:02.359394 2073170 cri.go:89] found id: "f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
I1026 01:38:02.359428 2073170 cri.go:89] found id: "3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
I1026 01:38:02.359433 2073170 cri.go:89] found id: ""
I1026 01:38:02.359441 2073170 logs.go:282] 2 containers: [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad]
I1026 01:38:02.359696 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.368425 2073170 ssh_runner.go:195] Run: which crictl
I1026 01:38:02.375386 2073170 logs.go:123] Gathering logs for storage-provisioner [3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad] ...
I1026 01:38:02.375416 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
I1026 01:38:02.483267 2073170 logs.go:123] Gathering logs for dmesg ...
I1026 01:38:02.483431 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1026 01:38:02.539716 2073170 logs.go:123] Gathering logs for kube-apiserver [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9] ...
I1026 01:38:02.539755 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
I1026 01:38:02.733373 2073170 logs.go:123] Gathering logs for kube-proxy [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52] ...
I1026 01:38:02.733425 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
I1026 01:38:02.854359 2073170 logs.go:123] Gathering logs for kube-scheduler [4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7] ...
I1026 01:38:02.854394 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
I1026 01:38:02.955435 2073170 logs.go:123] Gathering logs for kube-proxy [79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670] ...
I1026 01:38:02.955469 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
I1026 01:38:03.040330 2073170 logs.go:123] Gathering logs for kube-controller-manager [5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526] ...
I1026 01:38:03.040364 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
I1026 01:38:03.184875 2073170 logs.go:123] Gathering logs for container status ...
I1026 01:38:03.184928 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1026 01:38:03.308598 2073170 logs.go:123] Gathering logs for kubelet ...
I1026 01:38:03.308637 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1026 01:38:03.395084 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142066 658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-44wvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-44wvw" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.395487 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142157 658 reflector.go:138] object-"kube-system"/"metrics-server-token-7tsjh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7tsjh" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.395746 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142205 658 reflector.go:138] object-"kube-system"/"coredns-token-n94ql": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-n94ql" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.395995 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142249 658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.396249 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142293 658 reflector.go:138] object-"kube-system"/"kube-proxy-token-47vp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-47vp6" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.396495 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142333 658 reflector.go:138] object-"kube-system"/"kindnet-token-qqrpm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-qqrpm" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.396759 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142465 658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.397012 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142549 658 reflector.go:138] object-"default"/"default-token-2jcx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2jcx9" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-368787' and this object
W1026 01:38:03.405224 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.113479 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:38:03.406935 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.907637 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.410090 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:44 old-k8s-version-368787 kubelet[658]: E1026 01:32:44.742023 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:38:03.412301 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:56 old-k8s-version-368787 kubelet[658]: E1026 01:32:56.075801 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.412690 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.080022 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.412911 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.735904 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.413709 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:01 old-k8s-version-368787 kubelet[658]: E1026 01:33:01.507025 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.416720 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:11 old-k8s-version-368787 kubelet[658]: E1026 01:33:11.743711 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:38:03.417382 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:17 old-k8s-version-368787 kubelet[658]: E1026 01:33:17.172767 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.417786 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:21 old-k8s-version-368787 kubelet[658]: E1026 01:33:21.507449 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.418027 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:24 old-k8s-version-368787 kubelet[658]: E1026 01:33:24.731672 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.418426 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:32 old-k8s-version-368787 kubelet[658]: E1026 01:33:32.731878 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.418683 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:37 old-k8s-version-368787 kubelet[658]: E1026 01:33:37.731969 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.419380 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:46 old-k8s-version-368787 kubelet[658]: E1026 01:33:46.262782 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.419604 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:48 old-k8s-version-368787 kubelet[658]: E1026 01:33:48.732324 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.419986 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:51 old-k8s-version-368787 kubelet[658]: E1026 01:33:51.507083 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.422699 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:03 old-k8s-version-368787 kubelet[658]: E1026 01:34:03.750208 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:38:03.423152 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:06 old-k8s-version-368787 kubelet[658]: E1026 01:34:06.731790 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.423380 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:18 old-k8s-version-368787 kubelet[658]: E1026 01:34:18.732360 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.423781 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:21 old-k8s-version-368787 kubelet[658]: E1026 01:34:21.731670 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.423994 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:33 old-k8s-version-368787 kubelet[658]: E1026 01:34:33.732041 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.424632 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:37 old-k8s-version-368787 kubelet[658]: E1026 01:34:37.414157 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.425085 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:41 old-k8s-version-368787 kubelet[658]: E1026 01:34:41.507110 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.425345 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:48 old-k8s-version-368787 kubelet[658]: E1026 01:34:48.731821 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.425722 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:54 old-k8s-version-368787 kubelet[658]: E1026 01:34:54.731233 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.425954 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:59 old-k8s-version-368787 kubelet[658]: E1026 01:34:59.732434 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.426317 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:06 old-k8s-version-368787 kubelet[658]: E1026 01:35:06.731705 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.426524 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:12 old-k8s-version-368787 kubelet[658]: E1026 01:35:12.731827 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.426905 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:19 old-k8s-version-368787 kubelet[658]: E1026 01:35:19.732195 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.429672 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:25 old-k8s-version-368787 kubelet[658]: E1026 01:35:25.742123 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1026 01:38:03.430063 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:33 old-k8s-version-368787 kubelet[658]: E1026 01:35:33.731192 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.430283 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:40 old-k8s-version-368787 kubelet[658]: E1026 01:35:40.736836 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.430667 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:48 old-k8s-version-368787 kubelet[658]: E1026 01:35:48.731218 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.430891 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:53 old-k8s-version-368787 kubelet[658]: E1026 01:35:53.733617 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.431531 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:03 old-k8s-version-368787 kubelet[658]: E1026 01:36:03.662574 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.431751 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:06 old-k8s-version-368787 kubelet[658]: E1026 01:36:06.731650 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.432125 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:11 old-k8s-version-368787 kubelet[658]: E1026 01:36:11.507131 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.432342 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:21 old-k8s-version-368787 kubelet[658]: E1026 01:36:21.731783 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.432691 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:23 old-k8s-version-368787 kubelet[658]: E1026 01:36:23.731690 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.433042 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:34 old-k8s-version-368787 kubelet[658]: E1026 01:36:34.731309 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.433355 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:35 old-k8s-version-368787 kubelet[658]: E1026 01:36:35.736727 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.433731 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: E1026 01:36:48.731231 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.433935 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:49 old-k8s-version-368787 kubelet[658]: E1026 01:36:49.732052 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.434295 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: E1026 01:37:02.731253 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.434516 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:04 old-k8s-version-368787 kubelet[658]: E1026 01:37:04.731836 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.434912 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: E1026 01:37:16.731416 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.435166 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.435545 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.435770 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.436139 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.436351 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:03.436716 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:03.436953 2073170 logs.go:138] Found kubelet problem: Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1026 01:38:03.436967 2073170 logs.go:123] Gathering logs for etcd [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace] ...
I1026 01:38:03.436992 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
I1026 01:38:03.527806 2073170 logs.go:123] Gathering logs for etcd [19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272] ...
I1026 01:38:03.527843 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
I1026 01:37:59.578612 2083289 out.go:235] - Booting up control plane ...
I1026 01:37:59.578714 2083289 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1026 01:37:59.578795 2083289 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1026 01:37:59.579360 2083289 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1026 01:37:59.591006 2083289 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1026 01:37:59.597806 2083289 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1026 01:37:59.598119 2083289 kubeadm.go:310] [kubelet-start] Starting the kubelet
I1026 01:37:59.707933 2083289 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1026 01:37:59.708054 2083289 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1026 01:38:01.211210 2083289 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.508303346s
I1026 01:38:01.211297 2083289 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I1026 01:38:03.598581 2073170 logs.go:123] Gathering logs for kindnet [720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b] ...
I1026 01:38:03.598756 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
I1026 01:38:03.677581 2073170 logs.go:123] Gathering logs for containerd ...
I1026 01:38:03.677658 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1026 01:38:03.753106 2073170 logs.go:123] Gathering logs for describe nodes ...
I1026 01:38:03.753195 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1026 01:38:03.997226 2073170 logs.go:123] Gathering logs for coredns [3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e] ...
I1026 01:38:03.997300 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
I1026 01:38:04.087455 2073170 logs.go:123] Gathering logs for kube-scheduler [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044] ...
I1026 01:38:04.087550 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
I1026 01:38:04.175664 2073170 logs.go:123] Gathering logs for kindnet [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a] ...
I1026 01:38:04.175745 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
I1026 01:38:04.270341 2073170 logs.go:123] Gathering logs for kubernetes-dashboard [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125] ...
I1026 01:38:04.270371 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
I1026 01:38:04.370143 2073170 logs.go:123] Gathering logs for storage-provisioner [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb] ...
I1026 01:38:04.370175 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
I1026 01:38:04.447078 2073170 logs.go:123] Gathering logs for kube-apiserver [ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d] ...
I1026 01:38:04.447109 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
I1026 01:38:04.545939 2073170 logs.go:123] Gathering logs for coredns [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7] ...
I1026 01:38:04.545976 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
I1026 01:38:04.715996 2073170 logs.go:123] Gathering logs for kube-controller-manager [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab] ...
I1026 01:38:04.716021 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
I1026 01:38:04.880261 2073170 out.go:358] Setting ErrFile to fd 2...
I1026 01:38:04.880333 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1026 01:38:04.880402 2073170 out.go:270] X Problems detected in kubelet:
W1026 01:38:04.880449 2073170 out.go:270] Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:04.880486 2073170 out.go:270] Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:04.880529 2073170 out.go:270] Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1026 01:38:04.880562 2073170 out.go:270] Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
W1026 01:38:04.880596 2073170 out.go:270] Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1026 01:38:04.880641 2073170 out.go:358] Setting ErrFile to fd 2...
I1026 01:38:04.880663 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 01:38:09.713285 2083289 kubeadm.go:310] [api-check] The API server is healthy after 8.501988454s
I1026 01:38:09.742782 2083289 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1026 01:38:09.764250 2083289 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1026 01:38:09.802945 2083289 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I1026 01:38:09.803158 2083289 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-892584 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1026 01:38:09.825295 2083289 kubeadm.go:310] [bootstrap-token] Using token: u6tbb6.u2rwpec4etemhweo
I1026 01:38:09.827578 2083289 out.go:235] - Configuring RBAC rules ...
I1026 01:38:09.827753 2083289 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1026 01:38:09.837608 2083289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1026 01:38:09.850935 2083289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1026 01:38:09.855277 2083289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1026 01:38:09.861058 2083289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1026 01:38:09.868001 2083289 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1026 01:38:10.125197 2083289 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1026 01:38:10.548060 2083289 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I1026 01:38:11.121677 2083289 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I1026 01:38:11.124611 2083289 kubeadm.go:310]
I1026 01:38:11.124704 2083289 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I1026 01:38:11.124720 2083289 kubeadm.go:310]
I1026 01:38:11.124803 2083289 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I1026 01:38:11.124813 2083289 kubeadm.go:310]
I1026 01:38:11.124843 2083289 kubeadm.go:310] mkdir -p $HOME/.kube
I1026 01:38:11.127909 2083289 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1026 01:38:11.128007 2083289 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1026 01:38:11.128042 2083289 kubeadm.go:310]
I1026 01:38:11.128112 2083289 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I1026 01:38:11.128122 2083289 kubeadm.go:310]
I1026 01:38:11.128179 2083289 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I1026 01:38:11.128187 2083289 kubeadm.go:310]
I1026 01:38:11.128253 2083289 kubeadm.go:310] You should now deploy a pod network to the cluster.
I1026 01:38:11.128371 2083289 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1026 01:38:11.128472 2083289 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1026 01:38:11.128484 2083289 kubeadm.go:310]
I1026 01:38:11.128580 2083289 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I1026 01:38:11.128678 2083289 kubeadm.go:310] and service account keys on each node and then running the following as root:
I1026 01:38:11.128686 2083289 kubeadm.go:310]
I1026 01:38:11.128794 2083289 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u6tbb6.u2rwpec4etemhweo \
I1026 01:38:11.128937 2083289 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:312d9d71d8954a92713e020be0abaacd15647d9767bbc020c5ae409bd78f03a2 \
I1026 01:38:11.128981 2083289 kubeadm.go:310] --control-plane
I1026 01:38:11.128993 2083289 kubeadm.go:310]
I1026 01:38:11.129084 2083289 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I1026 01:38:11.129094 2083289 kubeadm.go:310]
I1026 01:38:11.129186 2083289 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u6tbb6.u2rwpec4etemhweo \
I1026 01:38:11.129305 2083289 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:312d9d71d8954a92713e020be0abaacd15647d9767bbc020c5ae409bd78f03a2
I1026 01:38:11.135282 2083289 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-aws\n", err: exit status 1
I1026 01:38:11.135454 2083289 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1026 01:38:11.135486 2083289 cni.go:84] Creating CNI manager for ""
I1026 01:38:11.135497 2083289 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1026 01:38:11.138849 2083289 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1026 01:38:11.140970 2083289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1026 01:38:11.145098 2083289 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
I1026 01:38:11.145119 2083289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I1026 01:38:11.164644 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1026 01:38:11.530096 2083289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1026 01:38:11.530176 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1026 01:38:11.530232 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-892584 minikube.k8s.io/updated_at=2024_10_26T01_38_11_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=embed-certs-892584 minikube.k8s.io/primary=true
I1026 01:38:11.751741 2083289 ops.go:34] apiserver oom_adj: -16
I1026 01:38:11.751866 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1026 01:38:12.251949 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1026 01:38:12.751990 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1026 01:38:13.251995 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1026 01:38:13.751942 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1026 01:38:14.252702 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1026 01:38:14.752004 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1026 01:38:14.846755 2083289 kubeadm.go:1113] duration metric: took 3.316651442s to wait for elevateKubeSystemPrivileges
I1026 01:38:14.846789 2083289 kubeadm.go:394] duration metric: took 22.657949564s to StartCluster
I1026 01:38:14.846808 2083289 settings.go:142] acquiring lock: {Name:mk5238870f54ce90633b3ed0ddcc81fb678d064e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:38:14.846873 2083289 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19868-1857747/kubeconfig
I1026 01:38:14.848338 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/kubeconfig: {Name:mk1a434cd0cc84bfd2a4a232bfd16b0239e78299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1026 01:38:14.848565 2083289 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1026 01:38:14.848667 2083289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1026 01:38:14.848910 2083289 config.go:182] Loaded profile config "embed-certs-892584": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 01:38:14.848950 2083289 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1026 01:38:14.849036 2083289 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-892584"
I1026 01:38:14.849083 2083289 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-892584"
I1026 01:38:14.849111 2083289 host.go:66] Checking if "embed-certs-892584" exists ...
I1026 01:38:14.849083 2083289 addons.go:69] Setting default-storageclass=true in profile "embed-certs-892584"
I1026 01:38:14.849187 2083289 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-892584"
I1026 01:38:14.849498 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Status}}
I1026 01:38:14.849563 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Status}}
I1026 01:38:14.852124 2083289 out.go:177] * Verifying Kubernetes components...
I1026 01:38:14.854196 2083289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1026 01:38:14.881298 2073170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1026 01:38:14.898252 2073170 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1026 01:38:14.900214 2083289 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1026 01:38:14.902356 2073170 out.go:201]
W1026 01:38:14.905153 2073170 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1026 01:38:14.905189 2073170 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1026 01:38:14.905207 2073170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1026 01:38:14.905214 2073170 out.go:270] *
W1026 01:38:14.906019 2073170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1026 01:38:14.907947 2073170 out.go:201]
==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                        ATTEMPT   POD ID          POD
6cd756151e892   523cad1a4df73   2 minutes ago   Exited    dashboard-metrics-scraper   5         8bcf177240c81   dashboard-metrics-scraper-8d5bb5db8-w4mwk
ed8fe83be8b1e   20b332c9a70d8   5 minutes ago   Running   kubernetes-dashboard        0         30b684a6011b9   kubernetes-dashboard-cd95d586-zbljx
19f64e2c8ba4c   0bcd66b03df5f   5 minutes ago   Running   kindnet-cni                 1         a6d65de0c3d26   kindnet-5vwks
f4444a86e1f19   ba04bb24b9575   5 minutes ago   Running   storage-provisioner         1         b3190ae24fe90   storage-provisioner
c8ce92c2bee0e   db91994f4ee8f   5 minutes ago   Running   coredns                     1         6c70d2eb24895   coredns-74ff55c5b-q7ksx
9bd96eb6d5a7e   1611cd07b61d5   5 minutes ago   Running   busybox                     1         3622b58680707   busybox
f8701160de76e   25a5233254979   5 minutes ago   Running   kube-proxy                  1         64c136539e749   kube-proxy-9q264
9e91002c8dfb9   e7605f88f17d6   5 minutes ago   Running   kube-scheduler              1         5dae65cc1dd59   kube-scheduler-old-k8s-version-368787
407cc3b1c2340   1df8a2b116bd1   5 minutes ago   Running   kube-controller-manager     1         7c1ec6ea72a39   kube-controller-manager-old-k8s-version-368787
3e88cb5ec2163   05b738aa1bc63   5 minutes ago   Running   etcd                        1         d761b03d84898   etcd-old-k8s-version-368787
caf4499d19d56   2c08bbbc02d3a   5 minutes ago   Running   kube-apiserver              1         a43731704d1a0   kube-apiserver-old-k8s-version-368787
c04d640227914   1611cd07b61d5   6 minutes ago   Exited    busybox                     0         066dab64f949f   busybox
3f79400ea7617   db91994f4ee8f   8 minutes ago   Exited    coredns                     0         920fdb26a0937   coredns-74ff55c5b-q7ksx
3765e18684825   ba04bb24b9575   8 minutes ago   Exited    storage-provisioner         0         59f0dfedddaa9   storage-provisioner
720cfd17791b3   0bcd66b03df5f   8 minutes ago   Exited    kindnet-cni                 0         10224234d2ce3   kindnet-5vwks
79f5f9136e040   25a5233254979   8 minutes ago   Exited    kube-proxy                  0         83ca250b827cc   kube-proxy-9q264
4cf9033bc9607   e7605f88f17d6   8 minutes ago   Exited    kube-scheduler              0         799d89ab603b2   kube-scheduler-old-k8s-version-368787
ee5aa1f2e06d3   2c08bbbc02d3a   8 minutes ago   Exited    kube-apiserver              0         e72a21e82106b   kube-apiserver-old-k8s-version-368787
5605b568cc91e   1df8a2b116bd1   8 minutes ago   Exited    kube-controller-manager     0         84b13c66fd5b5   kube-controller-manager-old-k8s-version-368787
19176bbdf5c5a   05b738aa1bc63   8 minutes ago   Exited    etcd                        0         976f3bb9124e3   etcd-old-k8s-version-368787
==> containerd <==
Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.764656284Z" level=info msg="CreateContainer within sandbox \"8bcf177240c81bb947783da60a5fcf55865cf9d6adc6d03ea99e7730ff526a55\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0\""
Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.766625844Z" level=info msg="StartContainer for \"6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0\""
Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.855901951Z" level=info msg="StartContainer for \"6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0\" returns successfully"
Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.889862492Z" level=info msg="shim disconnected" id=6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0 namespace=k8s.io
Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.890498118Z" level=warning msg="cleaning up after shim disconnected" id=6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0 namespace=k8s.io
Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.890733330Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 26 01:34:37 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:37.420056486Z" level=info msg="RemoveContainer for \"00721c6a03267f4a57534c88faf6e9e2b4f542cf2c27f2cb95035072fe5fb762\""
Oct 26 01:34:37 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:37.424757166Z" level=info msg="RemoveContainer for \"00721c6a03267f4a57534c88faf6e9e2b4f542cf2c27f2cb95035072fe5fb762\" returns successfully"
Oct 26 01:35:25 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:35:25.732326882Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 26 01:35:25 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:35:25.738884136Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Oct 26 01:35:25 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:35:25.741048086Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Oct 26 01:35:25 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:35:25.741115402Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.733707994Z" level=info msg="CreateContainer within sandbox \"8bcf177240c81bb947783da60a5fcf55865cf9d6adc6d03ea99e7730ff526a55\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.748868601Z" level=info msg="CreateContainer within sandbox \"8bcf177240c81bb947783da60a5fcf55865cf9d6adc6d03ea99e7730ff526a55\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01\""
Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.749633017Z" level=info msg="StartContainer for \"6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01\""
Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.827188140Z" level=info msg="StartContainer for \"6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01\" returns successfully"
Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.854002429Z" level=info msg="shim disconnected" id=6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01 namespace=k8s.io
Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.854218637Z" level=warning msg="cleaning up after shim disconnected" id=6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01 namespace=k8s.io
Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.854241513Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 26 01:36:03 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:03.664176345Z" level=info msg="RemoveContainer for \"6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0\""
Oct 26 01:36:03 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:03.669260768Z" level=info msg="RemoveContainer for \"6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0\" returns successfully"
Oct 26 01:38:13 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:38:13.732357088Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 26 01:38:13 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:38:13.741192394Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Oct 26 01:38:13 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:38:13.742802745Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Oct 26 01:38:13 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:38:13.742833736Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
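Note on the PullImage loop above: the failure never reaches the registry protocol at all. Resolution of fake.domain against 192.168.76.1:53 fails, so the manifest HEAD request can never be dialed. A minimal Go sketch (illustrative only, not part of minikube or containerd) reproducing the same two steps the containerd log shows:

    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    func main() {
        // Step 1: the resolver lookup that fails in the log above
        // ("dial tcp: lookup fake.domain on 192.168.76.1:53: no such host").
        if _, err := net.LookupHost("fake.domain"); err != nil {
            fmt.Println("lookup failed:", err)
        }

        // Step 2: the manifest HEAD request containerd would send next;
        // it fails at dial time for the same DNS reason.
        resp, err := http.Head("https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4")
        if err != nil {
            fmt.Println("HEAD failed:", err)
            return
        }
        resp.Body.Close()
    }

Run from anywhere, this prints the same "no such host" error containerd logs, which is the intended behavior of this test fixture image.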
==> coredns [3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:42243 - 32852 "HINFO IN 4571955147938569355.6194879312205306998. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024066525s
==> coredns [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:54420 - 55680 "HINFO IN 2594235961846424401.5936807825121529914. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030929455s
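The single HINFO query with a random multi-label name that each coredns instance logs is CoreDNS's loop-detection probe; NXDOMAIN, as seen here, is the healthy outcome. A rough sketch of the same kind of probe using the miekg/dns library CoreDNS itself is built on (the query name and address are copied from the log; this is not CoreDNS's actual code):

    package main

    import (
        "fmt"

        "github.com/miekg/dns"
    )

    func main() {
        // Ask for a random, nonexistent name with type HINFO, like the
        // loop plugin does at startup. NXDOMAIN means no forwarding loop.
        m := new(dns.Msg)
        m.SetQuestion("4571955147938569355.6194879312205306998.", dns.TypeHINFO)

        c := new(dns.Client)
        r, _, err := c.Exchange(m, "127.0.0.1:53")
        if err != nil {
            fmt.Println("exchange failed:", err)
            return
        }
        fmt.Println("rcode:", dns.RcodeToString[r.Rcode]) // expect NXDOMAIN
    }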
==> describe nodes <==
Name: old-k8s-version-368787
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-368787
kubernetes.io/os=linux
minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
minikube.k8s.io/name=old-k8s-version-368787
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_10_26T01_29_39_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 26 Oct 2024 01:29:35 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-368787
AcquireTime: <unset>
RenewTime: Sat, 26 Oct 2024 01:38:11 +0000
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  MemoryPressure  False   Sat, 26 Oct 2024 01:33:28 +0000   Sat, 26 Oct 2024 01:29:28 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Sat, 26 Oct 2024 01:33:28 +0000   Sat, 26 Oct 2024 01:29:28 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Sat, 26 Oct 2024 01:33:28 +0000   Sat, 26 Oct 2024 01:29:28 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           True    Sat, 26 Oct 2024 01:33:28 +0000   Sat, 26 Oct 2024 01:29:54 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-368787
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: a7d17f95dee04c9cb986384e241b3097
System UUID: 5c99fbfa-38dc-440d-b323-219a37c563dc
Boot ID: efe83352-e52f-4975-85ee-d7fbf692eb79
Kernel Version: 5.15.0-1071-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.22
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
  Namespace             Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------             ----                                             ------------  ----------  ---------------  -------------  ---
  default               busybox                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
  kube-system           coredns-74ff55c5b-q7ksx                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m23s
  kube-system           etcd-old-k8s-version-368787                      100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m30s
  kube-system           kindnet-5vwks                                    100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m23s
  kube-system           kube-apiserver-old-k8s-version-368787            250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
  kube-system           kube-controller-manager-old-k8s-version-368787   200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
  kube-system           kube-proxy-9q264                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
  kube-system           kube-scheduler-old-k8s-version-368787            100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
  kube-system           metrics-server-9975d5f86-v2pwf                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
  kube-system           storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
  kubernetes-dashboard  dashboard-metrics-scraper-8d5bb5db8-w4mwk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
  kubernetes-dashboard  kubernetes-dashboard-cd95d586-zbljx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (47%)  100m (5%)
  memory             420Mi (5%)  220Mi (2%)
  ephemeral-storage  100Mi (0%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  NodeHasSufficientMemory  8m50s (x4 over 8m50s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m50s (x4 over 8m50s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m50s (x4 over 8m50s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientPID
  Normal  Starting                 8m30s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  8m30s                  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m30s                  kubelet     Node old-k8s-version-368787 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m30s                  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m30s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                8m23s                  kubelet     Node old-k8s-version-368787 status is now: NodeReady
  Normal  Starting                 8m22s                  kube-proxy  Starting kube-proxy.
  Normal  Starting                 5m58s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m58s (x7 over 5m58s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
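The percentages in the Allocated resources table follow directly from the Allocatable figures above: 950m of 2000m CPU truncates to 47%, and 420Mi (430080Ki) of 8022296Ki memory truncates to 5%. A small Go check of that arithmetic:

    package main

    import "fmt"

    func main() {
        // Allocatable from "describe nodes" above: 2 CPUs, 8022296Ki memory.
        allocCPUMilli := int64(2000)
        allocMemKi := int64(8022296)

        // Summed requests from the pod table: 950m CPU, 420Mi memory.
        reqCPUMilli := int64(950)
        reqMemKi := int64(420) * 1024 // 1Mi = 1024Ki

        fmt.Printf("cpu:    %d%%\n", reqCPUMilli*100/allocCPUMilli) // 47%
        fmt.Printf("memory: %d%%\n", reqMemKi*100/allocMemKi)       // 5%
    }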
==> dmesg <==
==> etcd [19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272] <==
raft2024/10/26 01:29:28 INFO: ea7e25599daad906 became candidate at term 2
raft2024/10/26 01:29:28 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2024/10/26 01:29:28 INFO: ea7e25599daad906 became leader at term 2
raft2024/10/26 01:29:28 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2024-10-26 01:29:28.193583 I | etcdserver: setting up the initial cluster version to 3.4
2024-10-26 01:29:28.194337 N | etcdserver/membership: set the initial cluster version to 3.4
2024-10-26 01:29:28.194399 I | etcdserver/api: enabled capabilities for version 3.4
2024-10-26 01:29:28.194436 I | etcdserver: published {Name:old-k8s-version-368787 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2024-10-26 01:29:28.194452 I | embed: ready to serve client requests
2024-10-26 01:29:28.196127 I | embed: serving client requests on 127.0.0.1:2379
2024-10-26 01:29:28.196275 I | embed: ready to serve client requests
2024-10-26 01:29:28.197412 I | embed: serving client requests on 192.168.76.2:2379
2024-10-26 01:29:55.433106 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:29:57.534493 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:30:07.534702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:30:17.534647 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:30:27.534484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:30:37.534417 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:30:47.534889 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:30:57.534539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:31:07.534563 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:31:17.534863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:31:27.534462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:31:37.534440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:31:47.534733 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace] <==
2024-10-26 01:34:11.752656 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:34:21.752888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:34:31.752646 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:34:41.752594 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:34:51.752723 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:35:01.752552 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:35:11.752632 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:35:21.752674 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:35:31.752473 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:35:41.752455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:35:51.752585 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:36:01.752743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:36:11.752418 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:36:21.752651 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:36:31.752450 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:36:41.752639 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:36:51.752593 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:37:01.752528 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:37:11.752481 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:37:21.752649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:37:31.752511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:37:41.752402 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:37:51.752594 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:38:01.764595 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-26 01:38:11.752985 I | etcdserver/api/etcdhttp: /health OK (status code 200)
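Both etcd containers show nothing but /health OK at roughly 10-second intervals, so etcd is healthy for the entire window; the failures elsewhere in this run are not storage-related. A hedged sketch of an equivalent probe (it assumes the health endpoint answers without a client certificate, which a kubeadm-style etcd may not allow; InsecureSkipVerify is for illustration only):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // etcd's etcdhttp handler answers {"health":"true"} on /health.
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://127.0.0.1:2379/health")
        if err != nil {
            fmt.Println("health check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body))
    }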
==> kernel <==
01:38:17 up 9:20, 0 users, load average: 3.23, 2.36, 2.48
Linux old-k8s-version-368787 5.15.0-1071-aws #77~20.04.1-Ubuntu SMP Thu Oct 3 19:34:36 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a] <==
I1026 01:36:12.996223 1 main.go:300] handling current node
I1026 01:36:23.007222 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:36:23.007270 1 main.go:300] handling current node
I1026 01:36:32.996960 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:36:32.996995 1 main.go:300] handling current node
I1026 01:36:43.004950 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:36:43.004996 1 main.go:300] handling current node
I1026 01:36:53.008268 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:36:53.008557 1 main.go:300] handling current node
I1026 01:37:02.996690 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:37:02.996730 1 main.go:300] handling current node
I1026 01:37:13.003923 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:37:13.004029 1 main.go:300] handling current node
I1026 01:37:23.007266 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:37:23.007306 1 main.go:300] handling current node
I1026 01:37:32.996175 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:37:32.996399 1 main.go:300] handling current node
I1026 01:37:43.008598 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:37:43.008641 1 main.go:300] handling current node
I1026 01:37:53.005212 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:37:53.005265 1 main.go:300] handling current node
I1026 01:38:03.004213 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:38:03.004266 1 main.go:300] handling current node
I1026 01:38:13.009533 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:38:13.009571 1 main.go:300] handling current node
==> kindnet [720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b] <==
I1026 01:29:58.113782 1 shared_informer.go:320] Caches are synced for kube-network-policies
I1026 01:29:58.113812 1 metrics.go:61] Registering metrics
I1026 01:29:58.113872 1 controller.go:378] Syncing nftables rules
I1026 01:30:07.919672 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:30:07.919709 1 main.go:300] handling current node
I1026 01:30:17.913086 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:30:17.913148 1 main.go:300] handling current node
I1026 01:30:27.919497 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:30:27.919534 1 main.go:300] handling current node
I1026 01:30:37.920906 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:30:37.920949 1 main.go:300] handling current node
I1026 01:30:47.920337 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:30:47.920374 1 main.go:300] handling current node
I1026 01:30:57.912928 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:30:57.912965 1 main.go:300] handling current node
I1026 01:31:07.913556 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:31:07.913599 1 main.go:300] handling current node
I1026 01:31:17.920933 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:31:17.920964 1 main.go:300] handling current node
I1026 01:31:27.921894 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:31:27.921930 1 main.go:300] handling current node
I1026 01:31:37.915508 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:31:37.915543 1 main.go:300] handling current node
I1026 01:31:47.913023 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1026 01:31:47.913058 1 main.go:300] handling current node
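Both kindnet instances emit the same pair of lines about every 10 seconds because the daemon re-lists nodes on a fixed interval and, on this single-node cluster, only ever finds itself. A simplified loop of that shape (illustrative, not kindnet's real implementation):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        nodes := map[string]struct{}{"192.168.76.2": {}} // single-node cluster

        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()
        for range ticker.C {
            for ip := range nodes {
                fmt.Printf("Handling node with IPs: map[%s:{}]\n", ip)
                // The current node needs no pod-network routes to itself,
                // hence the paired "handling current node" line.
                fmt.Println("handling current node")
            }
        }
    }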
==> kube-apiserver [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9] <==
I1026 01:34:51.838069 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1026 01:34:51.838079 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1026 01:35:25.916104 1 client.go:360] parsed scheme: "passthrough"
I1026 01:35:25.916146 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1026 01:35:25.916156 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1026 01:35:30.781040 1 handler_proxy.go:102] no RequestInfo found in the context
E1026 01:35:30.781123 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1026 01:35:30.781134 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1026 01:36:05.183467 1 client.go:360] parsed scheme: "passthrough"
I1026 01:36:05.183512 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1026 01:36:05.183522 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1026 01:36:38.168035 1 client.go:360] parsed scheme: "passthrough"
I1026 01:36:38.168090 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1026 01:36:38.168100 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1026 01:37:18.580106 1 client.go:360] parsed scheme: "passthrough"
I1026 01:37:18.580226 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1026 01:37:18.580264 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1026 01:37:29.160914 1 handler_proxy.go:102] no RequestInfo found in the context
E1026 01:37:29.161152 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1026 01:37:29.161266 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1026 01:37:57.599711 1 client.go:360] parsed scheme: "passthrough"
I1026 01:37:57.599761 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1026 01:37:57.599962 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d] <==
I1026 01:29:36.509850 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1026 01:29:36.509881 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1026 01:29:36.560776 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1026 01:29:36.565941 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1026 01:29:36.566800 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1026 01:29:37.039359 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1026 01:29:37.102200 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1026 01:29:37.241378 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I1026 01:29:37.242513 1 controller.go:606] quota admission added evaluator for: endpoints
I1026 01:29:37.246944 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1026 01:29:37.576378 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1026 01:29:38.243722 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1026 01:29:38.680435 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1026 01:29:38.738682 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1026 01:29:54.211914 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1026 01:29:54.412093 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1026 01:30:00.823354 1 client.go:360] parsed scheme: "passthrough"
I1026 01:30:00.823414 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1026 01:30:00.823424 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1026 01:30:32.056320 1 client.go:360] parsed scheme: "passthrough"
I1026 01:30:32.056367 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1026 01:30:32.056376 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1026 01:31:09.668522 1 client.go:360] parsed scheme: "passthrough"
I1026 01:31:09.668573 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1026 01:31:09.668582 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
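The recurring 503 for v1beta1.metrics.k8s.io in the newer apiserver log is a downstream symptom of the metrics-server ImagePullBackOff seen earlier: the APIService resolves to a Service with no ready endpoints, so OpenAPI aggregation fails and the item is requeued with backoff. A generic sketch of that rate-limited requeue pattern using the older, non-generic client-go workqueue API (the fetch function is a stand-in, not the apiserver's code):

    package main

    import (
        "errors"
        "fmt"

        "k8s.io/client-go/util/workqueue"
    )

    // fetchOpenAPISpec stands in for the aggregator's real HTTP fetch; the
    // backing Service has no ready endpoints, so it keeps returning 503.
    func fetchOpenAPISpec(item string) error {
        return errors.New("503 service unavailable")
    }

    func main() {
        q := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
        defer q.ShutDown()
        q.Add("v1beta1.metrics.k8s.io")

        for i := 0; i < 3; i++ {
            item, shutdown := q.Get()
            if shutdown {
                return
            }
            if err := fetchOpenAPISpec(item.(string)); err != nil {
                fmt.Printf("action for item %v: Rate Limited Requeue (%v)\n", item, err)
                q.AddRateLimited(item) // exponential per-item backoff
            }
            q.Done(item)
        }
    }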
==> kube-controller-manager [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab] <==
W1026 01:33:51.601617 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1026 01:34:19.088035 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1026 01:34:23.252028 1 request.go:655] Throttling request took 1.048418027s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1?timeout=32s
W1026 01:34:24.103608 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1026 01:34:49.589897 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1026 01:34:55.754101 1 request.go:655] Throttling request took 1.048146679s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
W1026 01:34:56.605519 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1026 01:35:20.092113 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1026 01:35:28.255930 1 request.go:655] Throttling request took 1.048164262s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
W1026 01:35:29.107377 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1026 01:35:50.594034 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1026 01:36:00.807926 1 request.go:655] Throttling request took 1.048415146s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
W1026 01:36:01.609313 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1026 01:36:21.096067 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1026 01:36:33.259744 1 request.go:655] Throttling request took 1.048192687s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W1026 01:36:34.111365 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1026 01:36:51.597848 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1026 01:37:05.761740 1 request.go:655] Throttling request took 1.048147922s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1026 01:37:06.613115 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1026 01:37:22.100003 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1026 01:37:38.263851 1 request.go:655] Throttling request took 1.048427879s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1026 01:37:39.115306 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1026 01:37:52.602026 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1026 01:38:10.765714 1 request.go:655] Throttling request took 1.048288657s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W1026 01:38:11.617322 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
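The "Throttling request took 1.048...s" lines come from client-go's client-side token bucket, not from the apiserver. Assuming the usual defaults for this era of kube-controller-manager (roughly QPS 20, burst 30), a discovery burst drains the bucket and queues later requests for about a second. A sketch of that effect with golang.org/x/time/rate:

    package main

    import (
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Assumed defaults: --kube-api-qps=20, --kube-api-burst=30.
        limiter := rate.NewLimiter(rate.Limit(20), 30)

        // Fire a discovery-style burst; once the 30-token burst is drained,
        // reservation delays grow by 50ms per request, passing 1s around
        // the 50th request.
        for i := 0; i < 60; i++ {
            if d := limiter.Reserve().Delay(); d > time.Second {
                fmt.Printf("request %d: Throttling request took %v\n", i, d)
                break
            }
        }
    }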
==> kube-controller-manager [5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526] <==
I1026 01:29:54.414025 1 shared_informer.go:247] Caches are synced for taint
I1026 01:29:54.414353 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W1026 01:29:54.414529 1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-368787. Assuming now as a timestamp.
I1026 01:29:54.414709 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1026 01:29:54.415146 1 taint_manager.go:187] Starting NoExecuteTaintManager
I1026 01:29:54.418083 1 event.go:291] "Event occurred" object="old-k8s-version-368787" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-368787 event: Registered Node old-k8s-version-368787 in Controller"
I1026 01:29:54.430108 1 shared_informer.go:247] Caches are synced for endpoint_slice
I1026 01:29:54.431526 1 shared_informer.go:247] Caches are synced for attach detach
I1026 01:29:54.433327 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-368787" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1026 01:29:54.433470 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5vwks"
I1026 01:29:54.439504 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9q264"
I1026 01:29:54.439606 1 shared_informer.go:247] Caches are synced for persistent volume
E1026 01:29:54.498687 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4a4fba1f-e07a-4fa4-b69a-d21df0994c4b", ResourceVersion:"278", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63865502979, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241007-36f62932\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001b19f80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001b19fa0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b19fc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b19fe0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b58000), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b58020), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241007-36f62932", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b58040)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b58080)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001b327e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001b3ad48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000aa3340), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000eba0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001b3ad90)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I1026 01:29:54.584137 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1026 01:29:54.853454 1 shared_informer.go:247] Caches are synced for garbage collector
I1026 01:29:54.853520 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1026 01:29:54.884362 1 shared_informer.go:247] Caches are synced for garbage collector
I1026 01:29:55.679004 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I1026 01:29:55.715096 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-ng789"
I1026 01:29:59.414950 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I1026 01:31:49.709426 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I1026 01:31:49.746855 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
E1026 01:31:49.767804 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
E1026 01:31:49.925268 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I1026 01:31:50.902921 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-v2pwf"
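The two "object has been modified" failures above (daemonsets kindnet, clusterroles admin) are ordinary optimistic-concurrency conflicts on resourceVersion: another writer raced the controller, and the controller simply retries, as the later SuccessfulCreate events show. Client code typically wraps such updates in client-go's conflict-retry helper; a minimal hedged sketch:

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/retry"
    )

    func main() {
        attempts := 0
        err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
            attempts++
            // In real code: re-GET the object, re-apply the mutation, and
            // Update() it. A Conflict error triggers another attempt; any
            // other error (or nil) ends the loop.
            return nil
        })
        fmt.Println("attempts:", attempts, "err:", err)
    }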
==> kube-proxy [79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670] <==
I1026 01:29:55.548819 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1026 01:29:55.549139 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1026 01:29:55.579247 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1026 01:29:55.579353 1 server_others.go:185] Using iptables Proxier.
I1026 01:29:55.579586 1 server.go:650] Version: v1.20.0
I1026 01:29:55.580083 1 config.go:315] Starting service config controller
I1026 01:29:55.580109 1 shared_informer.go:240] Waiting for caches to sync for service config
I1026 01:29:55.589321 1 config.go:224] Starting endpoint slice config controller
I1026 01:29:55.589350 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1026 01:29:55.685953 1 shared_informer.go:247] Caches are synced for service config
I1026 01:29:55.703436 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52] <==
I1026 01:32:30.513230 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1026 01:32:30.513302 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1026 01:32:30.544638 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1026 01:32:30.544738 1 server_others.go:185] Using iptables Proxier.
I1026 01:32:30.544996 1 server.go:650] Version: v1.20.0
I1026 01:32:30.545591 1 config.go:315] Starting service config controller
I1026 01:32:30.545600 1 shared_informer.go:240] Waiting for caches to sync for service config
I1026 01:32:30.545617 1 config.go:224] Starting endpoint slice config controller
I1026 01:32:30.545620 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1026 01:32:30.645738 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1026 01:32:30.645809 1 shared_informer.go:247] Caches are synced for service config
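Both kube-proxy runs warn `Unknown proxy mode ""` and fall back to the iptables proxier, meaning no --proxy-mode was configured and the default path was taken. A simplified stand-in for that selection logic (the real kube-proxy also probes kernel support before committing):

    package main

    import "fmt"

    // chooseProxyMode mirrors the fallback logged above: an unset or
    // unknown mode string degrades to the iptables proxier.
    func chooseProxyMode(requested string) string {
        switch requested {
        case "ipvs", "iptables":
            return requested
        default:
            fmt.Printf("Unknown proxy mode %q, assuming iptables proxy\n", requested)
            return "iptables"
        }
    }

    func main() {
        fmt.Println("Using", chooseProxyMode(""), "Proxier.")
    }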
==> kube-scheduler [4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7] <==
I1026 01:29:31.447015 1 serving.go:331] Generated self-signed cert in-memory
W1026 01:29:35.728047 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1026 01:29:35.728321 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1026 01:29:35.728507 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1026 01:29:35.728626 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1026 01:29:35.839461 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1026 01:29:35.840057 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1026 01:29:35.840078 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1026 01:29:35.840095 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1026 01:29:35.856387 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1026 01:29:35.859371 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1026 01:29:35.859925 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1026 01:29:35.860033 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1026 01:29:35.860114 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1026 01:29:35.860186 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1026 01:29:35.860253 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1026 01:29:35.860326 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1026 01:29:35.860393 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1026 01:29:35.860457 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1026 01:29:35.860513 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1026 01:29:35.860615 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1026 01:29:36.790794 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1026 01:29:36.837599 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I1026 01:29:37.140248 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044] <==
I1026 01:32:22.113409 1 serving.go:331] Generated self-signed cert in-memory
W1026 01:32:28.059294 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1026 01:32:28.059359 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1026 01:32:28.059375 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1026 01:32:28.059381 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1026 01:32:28.243858 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1026 01:32:28.244485 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1026 01:32:28.244496 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1026 01:32:28.244557 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1026 01:32:28.345312 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
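"Generated self-signed cert in-memory" means each scheduler instance serves 127.0.0.1:10259 with a throwaway certificate that never touches disk; the RBAC "forbidden" errors that follow are a startup race that clears once the kubeadm-created role bindings propagate, as the final "Caches are synced" line confirms. A rough stdlib sketch of such an in-memory certificate (illustrative, not the scheduler's code):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "time"
    )

    func main() {
        // A self-issued serving cert held only in memory.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "localhost"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
            IsCA:                  true,
            BasicConstraintsValid: true,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Println("self-signed cert, DER bytes:", len(der))
    }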
==> kubelet <==
Oct 26 01:36:35 old-k8s-version-368787 kubelet[658]: E1026 01:36:35.736727 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: I1026 01:36:48.730843 658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: E1026 01:36:48.731231 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
Oct 26 01:36:49 old-k8s-version-368787 kubelet[658]: E1026 01:36:49.732052 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: I1026 01:37:02.730885 658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: E1026 01:37:02.731253 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
Oct 26 01:37:04 old-k8s-version-368787 kubelet[658]: E1026 01:37:04.731836 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: I1026 01:37:16.730981 658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: E1026 01:37:16.731416 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: I1026 01:37:28.730850 658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: I1026 01:37:42.731103 658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: I1026 01:37:54.730913 658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 26 01:38:05 old-k8s-version-368787 kubelet[658]: I1026 01:38:05.730841 658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
Oct 26 01:38:05 old-k8s-version-368787 kubelet[658]: E1026 01:38:05.731200 658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
Oct 26 01:38:13 old-k8s-version-368787 kubelet[658]: E1026 01:38:13.743102 658 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Oct 26 01:38:13 old-k8s-version-368787 kubelet[658]: E1026 01:38:13.743154 658 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Oct 26 01:38:13 old-k8s-version-368787 kubelet[658]: E1026 01:38:13.743293 658 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-7tsjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Oct 26 01:38:13 old-k8s-version-368787 kubelet[658]: E1026 01:38:13.743365 658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
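Note: the kubelet loop above is the test's intended failure fixture, not a regression by itself. metrics-server is pinned to the unresolvable image fake.domain/registry.k8s.io/echoserver:1.4, so every pull fails DNS lookup against 192.168.76.1:53 and the pod cycles between ErrImagePull and ImagePullBackOff. A minimal way to confirm this by hand (hedged sketch: assumes nslookup is available inside the minikube node image; the pod name is taken from the log above):

# Expect NXDOMAIN - fake.domain is deliberately unresolvable
minikube -p old-k8s-version-368787 ssh -- nslookup fake.domain

# The pod's event stream should show the same ErrImagePull/ImagePullBackOff cycle
kubectl --context old-k8s-version-368787 -n kube-system describe pod metrics-server-9975d5f86-v2pwf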
==> kubernetes-dashboard [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125] <==
2024/10/26 01:32:50 Starting overwatch
2024/10/26 01:32:50 Using namespace: kubernetes-dashboard
2024/10/26 01:32:50 Using in-cluster config to connect to apiserver
2024/10/26 01:32:50 Using secret token for csrf signing
2024/10/26 01:32:50 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/10/26 01:32:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/10/26 01:32:50 Successful initial request to the apiserver, version: v1.20.0
2024/10/26 01:32:50 Generating JWE encryption key
2024/10/26 01:32:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/10/26 01:32:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/10/26 01:32:50 Initializing JWE encryption key from synchronized object
2024/10/26 01:32:50 Creating in-cluster Sidecar client
2024/10/26 01:32:50 Serving insecurely on HTTP port: 9090
2024/10/26 01:32:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/26 01:33:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/26 01:33:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/26 01:34:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/26 01:34:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/26 01:35:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/26 01:35:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/26 01:36:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/26 01:36:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/26 01:37:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/26 01:37:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
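The health-check failures above are a downstream symptom, not a separate fault: with metrics-server never running, the aggregated metrics.k8s.io/v1beta1 API has no backend, so any request through it returns "the server is currently unable to handle the request". A hedged check of the aggregation layer (APIService name inferred from the metrics.k8s.io/v1beta1 errors later in this log; the metrics-server Service name is assumed from the addon manifest):

# Expect Available=False while the backing pod is stuck in ImagePullBackOff
kubectl --context old-k8s-version-368787 get apiservice v1beta1.metrics.k8s.io

# The Service should have no ready endpoints for the same reason
kubectl --context old-k8s-version-368787 -n kube-system get endpoints metrics-server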
==> storage-provisioner [3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad] <==
I1026 01:29:57.881908 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1026 01:29:57.904197 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1026 01:29:57.904243 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1026 01:29:57.919389 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1026 01:29:57.919849 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-368787_feb7fabf-98fb-48ab-8c01-82b3c62a2ef0!
I1026 01:29:57.919480 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a95d0560-36d1-497f-a232-dbdd16032885", APIVersion:"v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-368787_feb7fabf-98fb-48ab-8c01-82b3c62a2ef0 became leader
I1026 01:29:58.021067 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-368787_feb7fabf-98fb-48ab-8c01-82b3c62a2ef0!
==> storage-provisioner [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb] <==
I1026 01:32:32.300209 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1026 01:32:32.319088 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1026 01:32:32.319306 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1026 01:32:49.868673 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1026 01:32:49.868845 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-368787_921e834b-2258-46ac-9188-5c80455cd09d!
I1026 01:32:49.869786 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a95d0560-36d1-497f-a232-dbdd16032885", APIVersion:"v1", ResourceVersion:"789", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-368787_921e834b-2258-46ac-9188-5c80455cd09d became leader
I1026 01:32:49.969347 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-368787_921e834b-2258-46ac-9188-5c80455cd09d!
-- /stdout --
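Note on the two storage-provisioner logs above: they show a clean client-go leader-election handoff across the container restart. The new instance starts waiting on the kube-system/k8s.io-minikube-hostpath lease at 01:32:32 and only becomes leader at 01:32:49, once the previous holder's lease, recorded on the Endpoints object shown in the events, expires. A hedged way to inspect the election record (annotation key assumed from client-go's Endpoints-based resource lock, which this old provisioner build appears to use):

# Prints the current holder identity, acquire time, and renew time as JSON
kubectl --context old-k8s-version-368787 -n kube-system get endpoints k8s.io-minikube-hostpath \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'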
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-368787 -n old-k8s-version-368787
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-368787 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-v2pwf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-368787 describe pod metrics-server-9975d5f86-v2pwf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-368787 describe pod metrics-server-9975d5f86-v2pwf: exit status 1 (150.516053ms)
** stderr **
E1026 01:38:19.296254 2087098 memcache.go:287] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
E1026 01:38:19.310966 2087098 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
E1026 01:38:19.320032 2087098 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
E1026 01:38:19.326676 2087098 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
E1026 01:38:19.337183 2087098 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
E1026 01:38:19.341047 2087098 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
Error from server (NotFound): pods "metrics-server-9975d5f86-v2pwf" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-368787 describe pod metrics-server-9975d5f86-v2pwf: exit status 1
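Note: the NotFound above is an artifact of the post-mortem step itself, not of the cluster. The describe at helpers_test.go:277 passes no namespace, so kubectl looks for metrics-server-9975d5f86-v2pwf in default rather than kube-system; the memcache errors are only discovery noise from the unreachable metrics.k8s.io/v1beta1 group. A namespace-aware sketch of the same step (a hypothetical replacement, not the harness's actual code):

# Describe every non-running pod in its own namespace so the lookup cannot miss
kubectl --context old-k8s-version-368787 get po -A --field-selector=status.phase!=Running \
  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name --no-headers |
while read -r ns name; do
  kubectl --context old-k8s-version-368787 -n "$ns" describe pod "$name"
done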
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (375.85s)