=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-856421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E0407 13:24:46.523885 878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:25:07.904020 878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/addons-596243/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:26:43.454970 878594 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/functional-062962/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-856421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m13.185073752s)
-- stdout --
* [old-k8s-version-856421] minikube v1.35.0 on Ubuntu 20.04 (arm64)
  - MINIKUBE_LOCATION=20602
  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
  - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
  - MINIKUBE_BIN=out/minikube-linux-arm64
  - MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-856421" primary control-plane node in "old-k8s-version-856421" cluster
* Pulling base image v0.0.46-1743675393-20591 ...
* Restarting existing docker container for "old-k8s-version-856421" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
  - Using image registry.k8s.io/echoserver:1.4
  - Using image docker.io/kubernetesui/dashboard:v2.7.0
  - Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
    minikube -p old-k8s-version-856421 addons enable metrics-server
* Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
-- /stdout --
** stderr **
I0407 13:24:46.161004 1095137 out.go:345] Setting OutFile to fd 1 ...
I0407 13:24:46.161135 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:24:46.161146 1095137 out.go:358] Setting ErrFile to fd 2...
I0407 13:24:46.161153 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:24:46.161415 1095137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
I0407 13:24:46.161854 1095137 out.go:352] Setting JSON to false
I0407 13:24:46.162709 1095137 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18431,"bootTime":1744013856,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I0407 13:24:46.162786 1095137 start.go:139] virtualization:
I0407 13:24:46.167498 1095137 out.go:177] * [old-k8s-version-856421] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0407 13:24:46.170397 1095137 out.go:177] - MINIKUBE_LOCATION=20602
I0407 13:24:46.170578 1095137 notify.go:220] Checking for updates...
I0407 13:24:46.176287 1095137 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0407 13:24:46.179194 1095137 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
I0407 13:24:46.182066 1095137 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
I0407 13:24:46.185208 1095137 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0407 13:24:46.188047 1095137 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0407 13:24:46.191363 1095137 config.go:182] Loaded profile config "old-k8s-version-856421": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0407 13:24:46.194842 1095137 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0407 13:24:46.197625 1095137 driver.go:394] Setting default libvirt URI to qemu:///system
I0407 13:24:46.238916 1095137 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0407 13:24:46.239035 1095137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0407 13:24:46.330987 1095137 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:24:46.320710994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0407 13:24:46.331098 1095137 docker.go:318] overlay module found
I0407 13:24:46.334200 1095137 out.go:177] * Using the docker driver based on existing profile
I0407 13:24:46.336980 1095137 start.go:297] selected driver: docker
I0407 13:24:46.336998 1095137 start.go:901] validating driver "docker" against &{Name:old-k8s-version-856421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-856421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 13:24:46.337100 1095137 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0407 13:24:46.337853 1095137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0407 13:24:46.425336 1095137 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:24:46.416250605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0407 13:24:46.425679 1095137 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0407 13:24:46.425734 1095137 cni.go:84] Creating CNI manager for ""
I0407 13:24:46.425788 1095137 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0407 13:24:46.425834 1095137 start.go:340] cluster config:
{Name:old-k8s-version-856421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-856421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 13:24:46.429205 1095137 out.go:177] * Starting "old-k8s-version-856421" primary control-plane node in "old-k8s-version-856421" cluster
I0407 13:24:46.432139 1095137 cache.go:121] Beginning downloading kic base image for docker with containerd
I0407 13:24:46.435156 1095137 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
I0407 13:24:46.437896 1095137 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0407 13:24:46.437939 1095137 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
I0407 13:24:46.437948 1095137 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0407 13:24:46.437972 1095137 cache.go:56] Caching tarball of preloaded images
I0407 13:24:46.438068 1095137 preload.go:172] Found /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0407 13:24:46.438077 1095137 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0407 13:24:46.438185 1095137 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/config.json ...
I0407 13:24:46.458084 1095137 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
I0407 13:24:46.458109 1095137 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
I0407 13:24:46.458128 1095137 cache.go:230] Successfully downloaded all kic artifacts
I0407 13:24:46.458160 1095137 start.go:360] acquireMachinesLock for old-k8s-version-856421: {Name:mka794a348148701ceb7e35cf711bf1e3c93119a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:24:46.458222 1095137 start.go:364] duration metric: took 35.98µs to acquireMachinesLock for "old-k8s-version-856421"
I0407 13:24:46.458246 1095137 start.go:96] Skipping create...Using existing machine configuration
I0407 13:24:46.458256 1095137 fix.go:54] fixHost starting:
I0407 13:24:46.458508 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
I0407 13:24:46.476373 1095137 fix.go:112] recreateIfNeeded on old-k8s-version-856421: state=Stopped err=<nil>
W0407 13:24:46.476406 1095137 fix.go:138] unexpected machine state, will restart: <nil>
I0407 13:24:46.479631 1095137 out.go:177] * Restarting existing docker container for "old-k8s-version-856421" ...
I0407 13:24:46.482729 1095137 cli_runner.go:164] Run: docker start old-k8s-version-856421
I0407 13:24:46.826482 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
I0407 13:24:46.852734 1095137 kic.go:430] container "old-k8s-version-856421" state is running.
I0407 13:24:46.853109 1095137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-856421
I0407 13:24:46.878530 1095137 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/config.json ...
I0407 13:24:46.878839 1095137 machine.go:93] provisionDockerMachine start ...
I0407 13:24:46.878924 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:46.931809 1095137 main.go:141] libmachine: Using SSH client type: native
I0407 13:24:46.932146 1095137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34180 <nil> <nil>}
I0407 13:24:46.932169 1095137 main.go:141] libmachine: About to run SSH command:
hostname
I0407 13:24:46.932749 1095137 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37958->127.0.0.1:34180: read: connection reset by peer
I0407 13:24:50.077668 1095137 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-856421
I0407 13:24:50.077776 1095137 ubuntu.go:169] provisioning hostname "old-k8s-version-856421"
I0407 13:24:50.077892 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:50.115066 1095137 main.go:141] libmachine: Using SSH client type: native
I0407 13:24:50.115420 1095137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34180 <nil> <nil>}
I0407 13:24:50.115433 1095137 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-856421 && echo "old-k8s-version-856421" | sudo tee /etc/hostname
I0407 13:24:50.270350 1095137 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-856421
I0407 13:24:50.270449 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:50.295660 1095137 main.go:141] libmachine: Using SSH client type: native
I0407 13:24:50.295976 1095137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34180 <nil> <nil>}
I0407 13:24:50.295993 1095137 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-856421' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-856421/g' /etc/hosts;
	else
		echo '127.0.1.1 old-k8s-version-856421' | sudo tee -a /etc/hosts;
	fi
fi
I0407 13:24:50.442119 1095137 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0407 13:24:50.442146 1095137 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20602-873072/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-873072/.minikube}
I0407 13:24:50.442165 1095137 ubuntu.go:177] setting up certificates
I0407 13:24:50.442175 1095137 provision.go:84] configureAuth start
I0407 13:24:50.442249 1095137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-856421
I0407 13:24:50.470388 1095137 provision.go:143] copyHostCerts
I0407 13:24:50.470452 1095137 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem, removing ...
I0407 13:24:50.470467 1095137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem
I0407 13:24:50.470540 1095137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem (1078 bytes)
I0407 13:24:50.470643 1095137 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem, removing ...
I0407 13:24:50.470648 1095137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem
I0407 13:24:50.470676 1095137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem (1123 bytes)
I0407 13:24:50.470730 1095137 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem, removing ...
I0407 13:24:50.470735 1095137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem
I0407 13:24:50.470759 1095137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem (1675 bytes)
I0407 13:24:50.470816 1095137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-856421 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-856421]
I0407 13:24:51.028545 1095137 provision.go:177] copyRemoteCerts
I0407 13:24:51.028624 1095137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0407 13:24:51.028672 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:51.063797 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
I0407 13:24:51.162602 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0407 13:24:51.208565 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0407 13:24:51.257185 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0407 13:24:51.304039 1095137 provision.go:87] duration metric: took 861.849756ms to configureAuth
I0407 13:24:51.304074 1095137 ubuntu.go:193] setting minikube options for container-runtime
I0407 13:24:51.304290 1095137 config.go:182] Loaded profile config "old-k8s-version-856421": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0407 13:24:51.304305 1095137 machine.go:96] duration metric: took 4.425450441s to provisionDockerMachine
I0407 13:24:51.304313 1095137 start.go:293] postStartSetup for "old-k8s-version-856421" (driver="docker")
I0407 13:24:51.304329 1095137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0407 13:24:51.304389 1095137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0407 13:24:51.304432 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:51.335510 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
I0407 13:24:51.447227 1095137 ssh_runner.go:195] Run: cat /etc/os-release
I0407 13:24:51.450895 1095137 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0407 13:24:51.450941 1095137 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0407 13:24:51.450952 1095137 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0407 13:24:51.450960 1095137 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0407 13:24:51.450975 1095137 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-873072/.minikube/addons for local assets ...
I0407 13:24:51.451037 1095137 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-873072/.minikube/files for local assets ...
I0407 13:24:51.451115 1095137 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem -> 8785942.pem in /etc/ssl/certs
I0407 13:24:51.451218 1095137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0407 13:24:51.460239 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem --> /etc/ssl/certs/8785942.pem (1708 bytes)
I0407 13:24:51.503517 1095137 start.go:296] duration metric: took 199.182857ms for postStartSetup
I0407 13:24:51.503669 1095137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0407 13:24:51.503767 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:51.547336 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
I0407 13:24:51.650357 1095137 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0407 13:24:51.656394 1095137 fix.go:56] duration metric: took 5.198130306s for fixHost
I0407 13:24:51.656426 1095137 start.go:83] releasing machines lock for "old-k8s-version-856421", held for 5.198183172s
I0407 13:24:51.656501 1095137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-856421
I0407 13:24:51.676357 1095137 ssh_runner.go:195] Run: cat /version.json
I0407 13:24:51.676406 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:51.676652 1095137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0407 13:24:51.676704 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:51.713680 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
I0407 13:24:51.717916 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
I0407 13:24:51.833459 1095137 ssh_runner.go:195] Run: systemctl --version
I0407 13:24:51.973539 1095137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0407 13:24:51.977913 1095137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0407 13:24:52.015324 1095137 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0407 13:24:52.015404 1095137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0407 13:24:52.028173 1095137 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0407 13:24:52.028196 1095137 start.go:495] detecting cgroup driver to use...
I0407 13:24:52.028231 1095137 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0407 13:24:52.028281 1095137 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0407 13:24:52.046816 1095137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0407 13:24:52.066195 1095137 docker.go:217] disabling cri-docker service (if available) ...
I0407 13:24:52.066320 1095137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0407 13:24:52.086366 1095137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0407 13:24:52.103082 1095137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0407 13:24:52.234689 1095137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0407 13:24:52.376217 1095137 docker.go:233] disabling docker service ...
I0407 13:24:52.376335 1095137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0407 13:24:52.392912 1095137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0407 13:24:52.406141 1095137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0407 13:24:52.553265 1095137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0407 13:24:52.701543 1095137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0407 13:24:52.719714 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0407 13:24:52.740464 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0407 13:24:52.761072 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0407 13:24:52.774185 1095137 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0407 13:24:52.774315 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0407 13:24:52.786703 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 13:24:52.797781 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0407 13:24:52.812284 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 13:24:52.822016 1095137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0407 13:24:52.839288 1095137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0407 13:24:52.848947 1095137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0407 13:24:52.859599 1095137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0407 13:24:52.874303 1095137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:24:53.032277 1095137 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0407 13:24:53.377175 1095137 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0407 13:24:53.377305 1095137 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0407 13:24:53.386159 1095137 start.go:563] Will wait 60s for crictl version
I0407 13:24:53.386284 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:24:53.396526 1095137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0407 13:24:53.464817 1095137 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0407 13:24:53.464939 1095137 ssh_runner.go:195] Run: containerd --version
I0407 13:24:53.507756 1095137 ssh_runner.go:195] Run: containerd --version
I0407 13:24:53.545938 1095137 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
I0407 13:24:53.549123 1095137 cli_runner.go:164] Run: docker network inspect old-k8s-version-856421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 13:24:53.583640 1095137 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0407 13:24:53.587772 1095137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0407 13:24:53.608178 1095137 kubeadm.go:883] updating cluster {Name:old-k8s-version-856421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-856421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0407 13:24:53.608290 1095137 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0407 13:24:53.608346 1095137 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 13:24:53.693203 1095137 containerd.go:627] all images are preloaded for containerd runtime.
I0407 13:24:53.693223 1095137 containerd.go:534] Images already preloaded, skipping extraction
I0407 13:24:53.693287 1095137 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 13:24:53.757891 1095137 containerd.go:627] all images are preloaded for containerd runtime.
I0407 13:24:53.757913 1095137 cache_images.go:84] Images are preloaded, skipping loading
I0407 13:24:53.757922 1095137 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0407 13:24:53.758058 1095137 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-856421 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-856421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0407 13:24:53.758128 1095137 ssh_runner.go:195] Run: sudo crictl info
I0407 13:24:53.827432 1095137 cni.go:84] Creating CNI manager for ""
I0407 13:24:53.827514 1095137 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0407 13:24:53.827539 1095137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0407 13:24:53.827598 1095137 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-856421 NodeName:old-k8s-version-856421 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0407 13:24:53.827779 1095137 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "old-k8s-version-856421"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0407 13:24:53.827899 1095137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0407 13:24:53.840463 1095137 binaries.go:44] Found k8s binaries, skipping transfer
I0407 13:24:53.840616 1095137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0407 13:24:53.852232 1095137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0407 13:24:53.880832 1095137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0407 13:24:53.922101 1095137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0407 13:24:53.948500 1095137 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0407 13:24:53.952579 1095137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0407 13:24:53.972855 1095137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:24:54.124760 1095137 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0407 13:24:54.149747 1095137 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421 for IP: 192.168.76.2
I0407 13:24:54.149840 1095137 certs.go:194] generating shared ca certs ...
I0407 13:24:54.149871 1095137 certs.go:226] acquiring lock for ca certs: {Name:mk03094d90434f2a42c24ebaddfee021594c5911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:24:54.150093 1095137 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-873072/.minikube/ca.key
I0407 13:24:54.150186 1095137 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.key
I0407 13:24:54.150213 1095137 certs.go:256] generating profile certs ...
I0407 13:24:54.150356 1095137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/client.key
I0407 13:24:54.150477 1095137 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/apiserver.key.67e5f325
I0407 13:24:54.150562 1095137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/proxy-client.key
I0407 13:24:54.150727 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594.pem (1338 bytes)
W0407 13:24:54.150788 1095137 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594_empty.pem, impossibly tiny 0 bytes
I0407 13:24:54.150818 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem (1675 bytes)
I0407 13:24:54.150872 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem (1078 bytes)
I0407 13:24:54.150932 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem (1123 bytes)
I0407 13:24:54.150987 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem (1675 bytes)
I0407 13:24:54.151069 1095137 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem (1708 bytes)
I0407 13:24:54.151937 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0407 13:24:54.234245 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0407 13:24:54.301656 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0407 13:24:54.387687 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0407 13:24:54.442999 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0407 13:24:54.475508 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0407 13:24:54.522235 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0407 13:24:54.575107 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/old-k8s-version-856421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0407 13:24:54.619521 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0407 13:24:54.675081 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594.pem --> /usr/share/ca-certificates/878594.pem (1338 bytes)
I0407 13:24:54.721551 1095137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem --> /usr/share/ca-certificates/8785942.pem (1708 bytes)
I0407 13:24:54.748788 1095137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0407 13:24:54.769117 1095137 ssh_runner.go:195] Run: openssl version
I0407 13:24:54.775557 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/878594.pem && ln -fs /usr/share/ca-certificates/878594.pem /etc/ssl/certs/878594.pem"
I0407 13:24:54.787148 1095137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/878594.pem
I0407 13:24:54.791245 1095137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 7 12:44 /usr/share/ca-certificates/878594.pem
I0407 13:24:54.791358 1095137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/878594.pem
I0407 13:24:54.799240 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/878594.pem /etc/ssl/certs/51391683.0"
I0407 13:24:54.809136 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8785942.pem && ln -fs /usr/share/ca-certificates/8785942.pem /etc/ssl/certs/8785942.pem"
I0407 13:24:54.823430 1095137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8785942.pem
I0407 13:24:54.830333 1095137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 7 12:44 /usr/share/ca-certificates/8785942.pem
I0407 13:24:54.830479 1095137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8785942.pem
I0407 13:24:54.837690 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8785942.pem /etc/ssl/certs/3ec20f2e.0"
I0407 13:24:54.852382 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0407 13:24:54.870786 1095137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0407 13:24:54.874782 1095137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 7 12:37 /usr/share/ca-certificates/minikubeCA.pem
I0407 13:24:54.874898 1095137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0407 13:24:54.883034 1095137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0407 13:24:54.896267 1095137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0407 13:24:54.900575 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0407 13:24:54.914263 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0407 13:24:54.921345 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0407 13:24:54.938656 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0407 13:24:54.954012 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0407 13:24:54.970637 1095137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0407 13:24:54.977897 1095137 kubeadm.go:392] StartCluster: {Name:old-k8s-version-856421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-856421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 13:24:54.978065 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0407 13:24:54.978165 1095137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0407 13:24:55.051587 1095137 cri.go:89] found id: "e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
I0407 13:24:55.051673 1095137 cri.go:89] found id: "b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
I0407 13:24:55.051695 1095137 cri.go:89] found id: "1e215f6f3ad8e0bd3b6e794eeed7be2edfdd8c13538897b791d2e8e1db120357"
I0407 13:24:55.051717 1095137 cri.go:89] found id: "77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
I0407 13:24:55.051751 1095137 cri.go:89] found id: "ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
I0407 13:24:55.051776 1095137 cri.go:89] found id: "c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
I0407 13:24:55.051798 1095137 cri.go:89] found id: "d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
I0407 13:24:55.051831 1095137 cri.go:89] found id: "2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
I0407 13:24:55.051851 1095137 cri.go:89] found id: ""
I0407 13:24:55.051942 1095137 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0407 13:24:55.068842 1095137 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-04-07T13:24:55Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0407 13:24:55.068990 1095137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0407 13:24:55.078773 1095137 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0407 13:24:55.078847 1095137 kubeadm.go:593] restartPrimaryControlPlane start ...
I0407 13:24:55.078933 1095137 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0407 13:24:55.091929 1095137 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0407 13:24:55.092678 1095137 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-856421" does not appear in /home/jenkins/minikube-integration/20602-873072/kubeconfig
I0407 13:24:55.093027 1095137 kubeconfig.go:62] /home/jenkins/minikube-integration/20602-873072/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-856421" cluster setting kubeconfig missing "old-k8s-version-856421" context setting]
I0407 13:24:55.093689 1095137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-873072/kubeconfig: {Name:mk9de2da01a51fd73232a20700f86bdc259a91ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
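The two kubeconfig.go lines above describe a detect-and-repair cycle: the profile's cluster and context entries are missing from the kubeconfig, so they are rewritten under a file lock. A rough sketch of such a repair using client-go's clientcmd package, assuming overwriting the entries is all that is needed (repairKubeconfig is a hypothetical helper; locking is omitted):

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig (hypothetical) rewrites the cluster and context entries
    // for a profile, mirroring the "needs updating (will repair)" step above.
    func repairKubeconfig(path, profile, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cluster := clientcmdapi.NewCluster()
        cluster.Server = server // e.g. https://192.168.76.2:8443
        cfg.Clusters[profile] = cluster

        ctx := clientcmdapi.NewContext()
        ctx.Cluster = profile
        cfg.Contexts[profile] = ctx

        return clientcmd.WriteToFile(*cfg, path)
    }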
I0407 13:24:55.095732 1095137 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0407 13:24:55.115393 1095137 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0407 13:24:55.115503 1095137 kubeadm.go:597] duration metric: took 36.611741ms to restartPrimaryControlPlane
I0407 13:24:55.115546 1095137 kubeadm.go:394] duration metric: took 137.658086ms to StartCluster
I0407 13:24:55.115580 1095137 settings.go:142] acquiring lock: {Name:mk3e960f3698515246acbd5cb37ff276e0a43a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:24:55.115675 1095137 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20602-873072/kubeconfig
I0407 13:24:55.116753 1095137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-873072/kubeconfig: {Name:mk9de2da01a51fd73232a20700f86bdc259a91ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:24:55.117076 1095137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0407 13:24:55.117611 1095137 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0407 13:24:55.117757 1095137 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-856421"
I0407 13:24:55.117775 1095137 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-856421"
W0407 13:24:55.117782 1095137 addons.go:247] addon storage-provisioner should already be in state true
I0407 13:24:55.117810 1095137 host.go:66] Checking if "old-k8s-version-856421" exists ...
I0407 13:24:55.118623 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
I0407 13:24:55.119146 1095137 config.go:182] Loaded profile config "old-k8s-version-856421": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0407 13:24:55.119272 1095137 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-856421"
I0407 13:24:55.119287 1095137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-856421"
I0407 13:24:55.119614 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
I0407 13:24:55.123141 1095137 addons.go:69] Setting dashboard=true in profile "old-k8s-version-856421"
I0407 13:24:55.123180 1095137 addons.go:238] Setting addon dashboard=true in "old-k8s-version-856421"
W0407 13:24:55.123189 1095137 addons.go:247] addon dashboard should already be in state true
I0407 13:24:55.123229 1095137 host.go:66] Checking if "old-k8s-version-856421" exists ...
I0407 13:24:55.123809 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
I0407 13:24:55.132606 1095137 out.go:177] * Verifying Kubernetes components...
I0407 13:24:55.141616 1095137 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-856421"
I0407 13:24:55.141665 1095137 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-856421"
W0407 13:24:55.141686 1095137 addons.go:247] addon metrics-server should already be in state true
I0407 13:24:55.141763 1095137 host.go:66] Checking if "old-k8s-version-856421" exists ...
I0407 13:24:55.149971 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
I0407 13:24:55.170943 1095137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:24:55.198462 1095137 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0407 13:24:55.201616 1095137 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-856421"
W0407 13:24:55.201636 1095137 addons.go:247] addon default-storageclass should already be in state true
I0407 13:24:55.201661 1095137 host.go:66] Checking if "old-k8s-version-856421" exists ...
I0407 13:24:55.202163 1095137 cli_runner.go:164] Run: docker container inspect old-k8s-version-856421 --format={{.State.Status}}
I0407 13:24:55.202436 1095137 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:24:55.202451 1095137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0407 13:24:55.202501 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:55.213246 1095137 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0407 13:24:55.216428 1095137 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0407 13:24:55.221796 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0407 13:24:55.221827 1095137 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0407 13:24:55.221912 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:55.231666 1095137 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0407 13:24:55.235673 1095137 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0407 13:24:55.235699 1095137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0407 13:24:55.235767 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:55.286462 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
I0407 13:24:55.290799 1095137 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0407 13:24:55.290818 1095137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0407 13:24:55.290878 1095137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-856421
I0407 13:24:55.291302 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
I0407 13:24:55.291216 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
I0407 13:24:55.324008 1095137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/old-k8s-version-856421/id_rsa Username:docker}
I0407 13:24:55.419836 1095137 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0407 13:24:55.459453 1095137 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-856421" to be "Ready" ...
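node_ready.go starts a 6m polling loop here; as the later `connect: connection refused` lines show, transient apiserver errors are swallowed and polling continues. A rough client-go equivalent, with the 6m timeout taken from the log and everything else (interval, helper name) illustrative:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the named node reports Ready=True, swallowing
    // transient errors such as the connection-refused dials seen below while
    // the apiserver restarts.
    func waitNodeReady(cs kubernetes.Interface, name string) error {
        return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // not fatal: retry on the next tick
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }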
I0407 13:24:55.557056 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0407 13:24:55.557082 1095137 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0407 13:24:55.564156 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:24:55.603736 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0407 13:24:55.603814 1095137 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0407 13:24:55.629628 1095137 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0407 13:24:55.629718 1095137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0407 13:24:55.696378 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0407 13:24:55.699461 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0407 13:24:55.699535 1095137 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0407 13:24:55.702672 1095137 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0407 13:24:55.702742 1095137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0407 13:24:55.750272 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0407 13:24:55.750345 1095137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0407 13:24:55.817885 1095137 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0407 13:24:55.817972 1095137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0407 13:24:55.845934 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0407 13:24:55.846017 1095137 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0407 13:24:55.905120 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0407 13:24:55.954939 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0407 13:24:55.955018 1095137 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
W0407 13:24:55.992305 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:55.992354 1095137 retry.go:31] will retry after 132.758073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
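The apply failure above feeds minikube's retry loop: retry.go re-runs the kubectl apply after a randomized, growing delay (132ms here, then 227ms, 311ms, and so on below). A generic sketch of that retry-with-jittered-backoff shape, not minikube's actual retry.go (the base delay and doubling are assumptions):

    import (
        "log"
        "math/rand"
        "time"
    )

    // retryApply re-runs fn with a randomized, roughly doubling delay,
    // logging each failure the way the retry.go lines in this log do.
    func retryApply(fn func() error, attempts int) error {
        delay := 100 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            log.Printf("will retry after %v: %v", jittered, err)
            time.Sleep(jittered)
            delay *= 2
        }
        return err
    }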
I0407 13:24:56.078285 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0407 13:24:56.078325 1095137 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0407 13:24:56.125629 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0407 13:24:56.170815 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.170911 1095137 retry.go:31] will retry after 227.201927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:24:56.197616 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.197692 1095137 retry.go:31] will retry after 311.814515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.212175 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0407 13:24:56.212245 1095137 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0407 13:24:56.277359 1095137 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:24:56.277440 1095137 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0407 13:24:56.336334 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0407 13:24:56.343584 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.343684 1095137 retry.go:31] will retry after 274.037386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.398936 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0407 13:24:56.509870 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0407 13:24:56.557803 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.557838 1095137 retry.go:31] will retry after 291.088396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:24:56.615527 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.615561 1095137 retry.go:31] will retry after 227.116627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.618960 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0407 13:24:56.717986 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.718041 1095137 retry.go:31] will retry after 272.338008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:24:56.805934 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.805977 1095137 retry.go:31] will retry after 816.114206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:56.843237 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0407 13:24:56.849626 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:24:56.990572 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0407 13:24:57.048486 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:57.048520 1095137 retry.go:31] will retry after 817.098811ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:24:57.062400 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:57.062436 1095137 retry.go:31] will retry after 459.979601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:24:57.184233 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:57.184264 1095137 retry.go:31] will retry after 561.461539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:57.460884 1095137 node_ready.go:53] error getting node "old-k8s-version-856421": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-856421": dial tcp 192.168.76.2:8443: connect: connection refused
I0407 13:24:57.523205 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:24:57.622505 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0407 13:24:57.666888 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:57.666922 1095137 retry.go:31] will retry after 603.30577ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:57.746155 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0407 13:24:57.784022 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:57.784051 1095137 retry.go:31] will retry after 609.881854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:57.866392 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0407 13:24:57.901461 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:57.901494 1095137 retry.go:31] will retry after 489.132058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:24:58.045921 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:58.045954 1095137 retry.go:31] will retry after 1.187060245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:58.271310 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:24:58.391771 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0407 13:24:58.394102 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0407 13:24:58.455299 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:58.455333 1095137 retry.go:31] will retry after 468.170275ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:24:58.671682 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:58.671714 1095137 retry.go:31] will retry after 748.328959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:24:58.692920 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:58.692953 1095137 retry.go:31] will retry after 1.446493979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:58.924449 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0407 13:24:59.071228 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:59.071270 1095137 retry.go:31] will retry after 1.395417222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:59.233659 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0407 13:24:59.371726 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:59.371764 1095137 retry.go:31] will retry after 679.891518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:59.421077 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0407 13:24:59.571821 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:59.571853 1095137 retry.go:31] will retry after 2.379875632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:24:59.960761 1095137 node_ready.go:53] error getting node "old-k8s-version-856421": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-856421": dial tcp 192.168.76.2:8443: connect: connection refused
I0407 13:25:00.052093 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0407 13:25:00.139830 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0407 13:25:00.304885 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:00.304935 1095137 retry.go:31] will retry after 2.255398456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:25:00.356868 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:00.356902 1095137 retry.go:31] will retry after 2.777099262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:00.467884 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0407 13:25:00.613682 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:00.613742 1095137 retry.go:31] will retry after 1.437947407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:01.952542 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:25:02.052292 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0407 13:25:02.053320 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:02.053353 1095137 retry.go:31] will retry after 1.988995677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:25:02.152406 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:02.152443 1095137 retry.go:31] will retry after 3.212708422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:02.460172 1095137 node_ready.go:53] error getting node "old-k8s-version-856421": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-856421": dial tcp 192.168.76.2:8443: connect: connection refused
I0407 13:25:02.560458 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0407 13:25:02.650796 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:02.650827 1095137 retry.go:31] will retry after 2.59528773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:03.134512 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0407 13:25:03.225880 1095137 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:03.225913 1095137 retry.go:31] will retry after 1.815071135s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:25:04.043222 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:25:05.041939 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0407 13:25:05.247084 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0407 13:25:05.365836 1095137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:25:14.339410 1095137 node_ready.go:49] node "old-k8s-version-856421" has status "Ready":"True"
I0407 13:25:14.339430 1095137 node_ready.go:38] duration metric: took 18.879889633s for node "old-k8s-version-856421" to be "Ready" ...
I0407 13:25:14.339439 1095137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0407 13:25:14.517005 1095137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-gtrrb" in "kube-system" namespace to be "Ready" ...
I0407 13:25:14.555802 1095137 pod_ready.go:93] pod "coredns-74ff55c5b-gtrrb" in "kube-system" namespace has status "Ready":"True"
I0407 13:25:14.555884 1095137 pod_ready.go:82] duration metric: took 38.849882ms for pod "coredns-74ff55c5b-gtrrb" in "kube-system" namespace to be "Ready" ...
I0407 13:25:14.555910 1095137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
I0407 13:25:15.856467 1095137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.813209189s)
I0407 13:25:15.856562 1095137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.814599286s)
I0407 13:25:15.856580 1095137 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-856421"
I0407 13:25:15.856613 1095137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.609508359s)
I0407 13:25:15.930003 1095137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.564122108s)
I0407 13:25:15.933293 1095137 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-856421 addons enable metrics-server
I0407 13:25:15.936296 1095137 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
I0407 13:25:15.939099 1095137 addons.go:514] duration metric: took 20.821488345s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
I0407 13:25:16.564190 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:19.061462 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:21.561159 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:24.136004 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:26.561246 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:28.562608 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:31.066030 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:33.562442 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:35.563149 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:38.062532 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:40.062712 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:42.561848 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:45.073485 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:47.561302 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:49.566750 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:52.062390 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:54.562235 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:57.061737 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:25:59.561403 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:01.562652 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:04.061829 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:06.062380 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:08.561652 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:10.567855 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:13.061107 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:15.062703 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:17.562161 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:20.062292 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:22.562573 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:24.600696 1095137 pod_ready.go:103] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:27.062921 1095137 pod_ready.go:93] pod "etcd-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"True"
I0407 13:26:27.062962 1095137 pod_ready.go:82] duration metric: took 1m12.507027795s for pod "etcd-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
I0407 13:26:27.062980 1095137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
I0407 13:26:27.068010 1095137 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"True"
I0407 13:26:27.068037 1095137 pod_ready.go:82] duration metric: took 5.04964ms for pod "kube-apiserver-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
I0407 13:26:27.068051 1095137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
I0407 13:26:29.073533 1095137 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:31.073993 1095137 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:33.574731 1095137 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:35.074653 1095137 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"True"
I0407 13:26:35.074681 1095137 pod_ready.go:82] duration metric: took 8.006621835s for pod "kube-controller-manager-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
I0407 13:26:35.074695 1095137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j5fsn" in "kube-system" namespace to be "Ready" ...
I0407 13:26:35.081073 1095137 pod_ready.go:93] pod "kube-proxy-j5fsn" in "kube-system" namespace has status "Ready":"True"
I0407 13:26:35.081099 1095137 pod_ready.go:82] duration metric: took 6.395638ms for pod "kube-proxy-j5fsn" in "kube-system" namespace to be "Ready" ...
I0407 13:26:35.081112 1095137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
I0407 13:26:35.086507 1095137 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-856421" in "kube-system" namespace has status "Ready":"True"
I0407 13:26:35.086539 1095137 pod_ready.go:82] duration metric: took 5.419271ms for pod "kube-scheduler-old-k8s-version-856421" in "kube-system" namespace to be "Ready" ...
I0407 13:26:35.086551 1095137 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace to be "Ready" ...
I0407 13:26:37.092752 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:39.592070 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:41.592146 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:43.592330 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:45.592393 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:47.592493 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:49.592720 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:52.092514 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:54.591788 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:56.592197 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:26:58.592770 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:00.593086 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:02.593281 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:04.593492 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:06.594455 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:09.092036 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:11.092074 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:13.594376 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:16.091807 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:18.092934 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:20.592561 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:22.592787 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:25.093408 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:27.593290 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:30.096676 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:32.592208 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:34.592875 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:37.091999 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:39.092356 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:41.593206 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:43.594362 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:46.092634 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:48.591768 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:50.592248 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:52.592293 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:55.092777 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:57.092834 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:27:59.093062 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:01.592797 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:03.595050 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:06.094410 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:08.591958 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:10.593138 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:12.593370 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:15.093337 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:17.593583 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:20.093257 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:22.593250 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:25.092741 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:27.098752 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:29.593024 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:32.094121 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:34.103261 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:36.593128 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:38.593462 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:41.091749 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:43.592360 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:45.595526 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:48.093132 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:50.592196 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:52.593320 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:55.093516 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:57.093911 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:28:59.593224 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:01.594270 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:04.092242 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:06.092347 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:08.592336 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:10.592427 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:13.092295 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:15.093464 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:17.592666 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:20.093234 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:22.093400 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:24.593422 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:26.594636 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:29.093768 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:31.591467 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:33.600997 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:36.098155 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:38.592286 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:40.594048 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:43.092617 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:45.095519 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:47.594708 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:50.095158 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:52.100030 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:54.592366 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:57.092221 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:29:59.593015 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:01.595873 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:04.093251 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:06.591971 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:08.592776 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:11.092451 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:13.594829 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:16.094581 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:18.595415 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:21.092452 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:23.093188 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:25.104796 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:27.592620 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:29.594193 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:32.091750 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:34.592357 1095137 pod_ready.go:103] pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace has status "Ready":"False"
I0407 13:30:35.093206 1095137 pod_ready.go:82] duration metric: took 4m0.006638227s for pod "metrics-server-9975d5f86-tkvrz" in "kube-system" namespace to be "Ready" ...
E0407 13:30:35.093237 1095137 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0407 13:30:35.093259 1095137 pod_ready.go:39] duration metric: took 5m20.753795595s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
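The 4m0s pod wait expires because metrics-server-9975d5f86-tkvrz never reports Ready; the kubelet log gathered further down shows its container stuck in ImagePullBackOff. A quick way to confirm from outside the node, sketched here assuming kubectl access to the same profile and that the addon's pods carry the k8s-app=metrics-server label:

    # List the addon's pods, then inspect the failing pod's recent events
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kube-system describe pod metrics-server-9975d5f86-tkvrz | tail -n 20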
I0407 13:30:35.093276 1095137 api_server.go:52] waiting for apiserver process to appear ...
I0407 13:30:35.093322 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0407 13:30:35.093383 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0407 13:30:35.142863 1095137 cri.go:89] found id: "a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
I0407 13:30:35.142893 1095137 cri.go:89] found id: "d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
I0407 13:30:35.142899 1095137 cri.go:89] found id: ""
I0407 13:30:35.142907 1095137 logs.go:282] 2 containers: [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b]
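Two IDs per component are expected after the container restart this test performs: crictl ps -a also lists exited instances. The same listing can be taken interactively, e.g. (a sketch using the profile name from this run):

    # Show both the running and the exited kube-apiserver containers with their states
    minikube -p old-k8s-version-856421 ssh -- sudo crictl ps -a --name kube-apiserver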
I0407 13:30:35.143001 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.147050 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.150870 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0407 13:30:35.150942 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0407 13:30:35.190454 1095137 cri.go:89] found id: "5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
I0407 13:30:35.190476 1095137 cri.go:89] found id: "ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
I0407 13:30:35.190482 1095137 cri.go:89] found id: ""
I0407 13:30:35.190489 1095137 logs.go:282] 2 containers: [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735]
I0407 13:30:35.190556 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.194338 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.198054 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0407 13:30:35.198130 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0407 13:30:35.243104 1095137 cri.go:89] found id: "051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
I0407 13:30:35.243125 1095137 cri.go:89] found id: "e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
I0407 13:30:35.243130 1095137 cri.go:89] found id: ""
I0407 13:30:35.243137 1095137 logs.go:282] 2 containers: [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce]
I0407 13:30:35.243196 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.246980 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.250601 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0407 13:30:35.250676 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0407 13:30:35.290776 1095137 cri.go:89] found id: "d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
I0407 13:30:35.290800 1095137 cri.go:89] found id: "c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
I0407 13:30:35.290805 1095137 cri.go:89] found id: ""
I0407 13:30:35.290813 1095137 logs.go:282] 2 containers: [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030]
I0407 13:30:35.290924 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.294717 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.298053 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0407 13:30:35.298125 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0407 13:30:35.339141 1095137 cri.go:89] found id: "74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
I0407 13:30:35.339176 1095137 cri.go:89] found id: "77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
I0407 13:30:35.339182 1095137 cri.go:89] found id: ""
I0407 13:30:35.339192 1095137 logs.go:282] 2 containers: [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b]
I0407 13:30:35.339260 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.343444 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.347381 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0407 13:30:35.347466 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0407 13:30:35.386505 1095137 cri.go:89] found id: "04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
I0407 13:30:35.386572 1095137 cri.go:89] found id: "2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
I0407 13:30:35.386590 1095137 cri.go:89] found id: ""
I0407 13:30:35.386605 1095137 logs.go:282] 2 containers: [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2]
I0407 13:30:35.386672 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.391142 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.395064 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0407 13:30:35.395142 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0407 13:30:35.434125 1095137 cri.go:89] found id: "e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
I0407 13:30:35.434150 1095137 cri.go:89] found id: "b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
I0407 13:30:35.434156 1095137 cri.go:89] found id: ""
I0407 13:30:35.434163 1095137 logs.go:282] 2 containers: [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb]
I0407 13:30:35.434247 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.438141 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.441512 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0407 13:30:35.441726 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0407 13:30:35.481889 1095137 cri.go:89] found id: "2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
I0407 13:30:35.481955 1095137 cri.go:89] found id: "d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
I0407 13:30:35.481974 1095137 cri.go:89] found id: ""
I0407 13:30:35.481998 1095137 logs.go:282] 2 containers: [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849]
I0407 13:30:35.482078 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.485908 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.489672 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0407 13:30:35.489809 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0407 13:30:35.530554 1095137 cri.go:89] found id: "3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
I0407 13:30:35.530621 1095137 cri.go:89] found id: ""
I0407 13:30:35.530643 1095137 logs.go:282] 1 container: [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625]
I0407 13:30:35.530739 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:35.534295 1095137 logs.go:123] Gathering logs for etcd [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5] ...
I0407 13:30:35.534319 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
I0407 13:30:35.584074 1095137 logs.go:123] Gathering logs for kube-proxy [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088] ...
I0407 13:30:35.584106 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
I0407 13:30:35.624129 1095137 logs.go:123] Gathering logs for kindnet [b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb] ...
I0407 13:30:35.624158 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
I0407 13:30:35.665670 1095137 logs.go:123] Gathering logs for container status ...
I0407 13:30:35.665751 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0407 13:30:35.721123 1095137 logs.go:123] Gathering logs for kube-apiserver [d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b] ...
I0407 13:30:35.721154 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
I0407 13:30:35.776096 1095137 logs.go:123] Gathering logs for kube-scheduler [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7] ...
I0407 13:30:35.776130 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
I0407 13:30:35.819279 1095137 logs.go:123] Gathering logs for kube-scheduler [c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030] ...
I0407 13:30:35.819309 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
I0407 13:30:35.872048 1095137 logs.go:123] Gathering logs for kube-apiserver [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af] ...
I0407 13:30:35.872080 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
I0407 13:30:35.957224 1095137 logs.go:123] Gathering logs for etcd [ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735] ...
I0407 13:30:35.957262 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
I0407 13:30:36.000405 1095137 logs.go:123] Gathering logs for coredns [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a] ...
I0407 13:30:36.000487 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
I0407 13:30:36.045995 1095137 logs.go:123] Gathering logs for coredns [e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce] ...
I0407 13:30:36.046027 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
I0407 13:30:36.101062 1095137 logs.go:123] Gathering logs for storage-provisioner [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61] ...
I0407 13:30:36.101096 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
I0407 13:30:36.151700 1095137 logs.go:123] Gathering logs for storage-provisioner [d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849] ...
I0407 13:30:36.151732 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
I0407 13:30:36.197533 1095137 logs.go:123] Gathering logs for kubelet ...
I0407 13:30:36.197582 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0407 13:30:36.257383 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307390 667 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:36.257840 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307927 667 reflector.go:138] object-"kube-system"/"kube-proxy-token-j6crq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-j6crq" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:36.258057 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308116 667 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:36.258285 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308292 667 reflector.go:138] object-"kube-system"/"storage-provisioner-token-nvxlj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-nvxlj" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:36.258499 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308445 667 reflector.go:138] object-"default"/"default-token-znh7g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-znh7g" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:36.258716 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308590 667 reflector.go:138] object-"kube-system"/"kindnet-token-fxnc5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-fxnc5" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:36.258927 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308753 667 reflector.go:138] object-"kube-system"/"coredns-token-sjxkg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-sjxkg" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:36.265181 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:15 old-k8s-version-856421 kubelet[667]: E0407 13:25:15.094522 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:36.269525 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:16 old-k8s-version-856421 kubelet[667]: E0407 13:25:16.056738 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.273083 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:27 old-k8s-version-856421 kubelet[667]: E0407 13:25:27.890508 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:36.274796 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:40 old-k8s-version-856421 kubelet[667]: E0407 13:25:40.901804 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.275384 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:42 old-k8s-version-856421 kubelet[667]: E0407 13:25:42.190528 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.276045 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:43 old-k8s-version-856421 kubelet[667]: E0407 13:25:43.194128 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.276493 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:47 old-k8s-version-856421 kubelet[667]: E0407 13:25:47.208836 667 pod_workers.go:191] Error syncing pod ffa09209-8141-4692-8b43-e212485a4adb ("storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"
W0407 13:30:36.276818 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:49 old-k8s-version-856421 kubelet[667]: E0407 13:25:49.601173 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.279591 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:55 old-k8s-version-856421 kubelet[667]: E0407 13:25:55.894550 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:36.280308 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:02 old-k8s-version-856421 kubelet[667]: E0407 13:26:02.259589 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.280492 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:07 old-k8s-version-856421 kubelet[667]: E0407 13:26:07.882067 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.280816 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:09 old-k8s-version-856421 kubelet[667]: E0407 13:26:09.601119 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.280999 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:21 old-k8s-version-856421 kubelet[667]: E0407 13:26:21.882035 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.281582 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:23 old-k8s-version-856421 kubelet[667]: E0407 13:26:23.333979 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.281911 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:29 old-k8s-version-856421 kubelet[667]: E0407 13:26:29.601138 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.282096 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:33 old-k8s-version-856421 kubelet[667]: E0407 13:26:33.882060 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.282424 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:42 old-k8s-version-856421 kubelet[667]: E0407 13:26:42.882285 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.284867 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:46 old-k8s-version-856421 kubelet[667]: E0407 13:26:46.916880 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:36.285192 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:53 old-k8s-version-856421 kubelet[667]: E0407 13:26:53.881641 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.285375 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:58 old-k8s-version-856421 kubelet[667]: E0407 13:26:58.887165 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.285969 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:05 old-k8s-version-856421 kubelet[667]: E0407 13:27:05.451459 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.286294 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:09 old-k8s-version-856421 kubelet[667]: E0407 13:27:09.601083 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.286481 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:11 old-k8s-version-856421 kubelet[667]: E0407 13:27:11.882020 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.286805 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:22 old-k8s-version-856421 kubelet[667]: E0407 13:27:22.882870 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.286988 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:24 old-k8s-version-856421 kubelet[667]: E0407 13:27:24.883611 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.287316 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:33 old-k8s-version-856421 kubelet[667]: E0407 13:27:33.881645 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.287503 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:36 old-k8s-version-856421 kubelet[667]: E0407 13:27:36.883495 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.287827 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:46 old-k8s-version-856421 kubelet[667]: E0407 13:27:46.882237 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.288011 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:49 old-k8s-version-856421 kubelet[667]: E0407 13:27:49.882049 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.288193 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:00 old-k8s-version-856421 kubelet[667]: E0407 13:28:00.882106 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.288518 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:01 old-k8s-version-856421 kubelet[667]: E0407 13:28:01.881859 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.288842 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:13 old-k8s-version-856421 kubelet[667]: E0407 13:28:13.882356 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.291435 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:15 old-k8s-version-856421 kubelet[667]: E0407 13:28:15.895177 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:36.291774 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:24 old-k8s-version-856421 kubelet[667]: E0407 13:28:24.882233 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.291963 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:27 old-k8s-version-856421 kubelet[667]: E0407 13:28:27.882283 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.292546 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:36 old-k8s-version-856421 kubelet[667]: E0407 13:28:36.680281 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.292729 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:38 old-k8s-version-856421 kubelet[667]: E0407 13:28:38.882208 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.293054 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:39 old-k8s-version-856421 kubelet[667]: E0407 13:28:39.601171 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.293238 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:50 old-k8s-version-856421 kubelet[667]: E0407 13:28:50.882465 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.293561 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:52 old-k8s-version-856421 kubelet[667]: E0407 13:28:52.882220 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.293753 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:01 old-k8s-version-856421 kubelet[667]: E0407 13:29:01.882101 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.294080 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:04 old-k8s-version-856421 kubelet[667]: E0407 13:29:04.885771 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.294263 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:15 old-k8s-version-856421 kubelet[667]: E0407 13:29:15.882105 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.294592 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:19 old-k8s-version-856421 kubelet[667]: E0407 13:29:19.881643 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.294775 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:28 old-k8s-version-856421 kubelet[667]: E0407 13:29:28.884253 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.295099 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:32 old-k8s-version-856421 kubelet[667]: E0407 13:29:32.882068 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.295282 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:39 old-k8s-version-856421 kubelet[667]: E0407 13:29:39.883031 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.295613 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:47 old-k8s-version-856421 kubelet[667]: E0407 13:29:47.882527 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.295795 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:50 old-k8s-version-856421 kubelet[667]: E0407 13:29:50.882583 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.296119 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:59 old-k8s-version-856421 kubelet[667]: E0407 13:29:59.882436 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.296304 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:01 old-k8s-version-856421 kubelet[667]: E0407 13:30:01.885267 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.296653 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: E0407 13:30:10.882871 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.296836 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.297171 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.297353 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.297778 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
I0407 13:30:36.297794 1095137 logs.go:123] Gathering logs for describe nodes ...
I0407 13:30:36.297813 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0407 13:30:36.492001 1095137 logs.go:123] Gathering logs for kube-proxy [77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b] ...
I0407 13:30:36.492107 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
I0407 13:30:36.541166 1095137 logs.go:123] Gathering logs for kube-controller-manager [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df] ...
I0407 13:30:36.541195 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
I0407 13:30:36.602522 1095137 logs.go:123] Gathering logs for kube-controller-manager [2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2] ...
I0407 13:30:36.602560 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
I0407 13:30:36.668156 1095137 logs.go:123] Gathering logs for kindnet [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b] ...
I0407 13:30:36.668194 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
I0407 13:30:36.712474 1095137 logs.go:123] Gathering logs for kubernetes-dashboard [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625] ...
I0407 13:30:36.712504 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
I0407 13:30:36.751343 1095137 logs.go:123] Gathering logs for containerd ...
I0407 13:30:36.751370 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0407 13:30:36.817010 1095137 logs.go:123] Gathering logs for dmesg ...
I0407 13:30:36.817095 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0407 13:30:36.841811 1095137 out.go:358] Setting ErrFile to fd 2...
I0407 13:30:36.841838 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0407 13:30:36.841885 1095137 out.go:270] X Problems detected in kubelet:
W0407 13:30:36.841897 1095137 out.go:270] Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: E0407 13:30:10.882871 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.841903 1095137 out.go:270] Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.841918 1095137 out.go:270] Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:36.841924 1095137 out.go:270] Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:36.841939 1095137 out.go:270] Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
I0407 13:30:36.841946 1095137 out.go:358] Setting ErrFile to fd 2...
I0407 13:30:36.841952 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:30:46.842901 1095137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 13:30:46.870854 1095137 api_server.go:72] duration metric: took 5m51.753710743s to wait for apiserver process to appear ...
I0407 13:30:46.870880 1095137 api_server.go:88] waiting for apiserver healthz status ...
I0407 13:30:46.870915 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0407 13:30:46.870969 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0407 13:30:46.986233 1095137 cri.go:89] found id: "a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
I0407 13:30:46.986252 1095137 cri.go:89] found id: "d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
I0407 13:30:46.986257 1095137 cri.go:89] found id: ""
I0407 13:30:46.986264 1095137 logs.go:282] 2 containers: [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b]
I0407 13:30:46.986340 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:46.990308 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:46.993838 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0407 13:30:46.993911 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0407 13:30:47.052280 1095137 cri.go:89] found id: "5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
I0407 13:30:47.052300 1095137 cri.go:89] found id: "ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
I0407 13:30:47.052305 1095137 cri.go:89] found id: ""
I0407 13:30:47.052313 1095137 logs.go:282] 2 containers: [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735]
I0407 13:30:47.052369 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.056223 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.059720 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0407 13:30:47.059794 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0407 13:30:47.130170 1095137 cri.go:89] found id: "051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
I0407 13:30:47.130191 1095137 cri.go:89] found id: "e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
I0407 13:30:47.130196 1095137 cri.go:89] found id: ""
I0407 13:30:47.130204 1095137 logs.go:282] 2 containers: [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce]
I0407 13:30:47.130261 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.134245 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.143189 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0407 13:30:47.143271 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0407 13:30:47.202603 1095137 cri.go:89] found id: "d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
I0407 13:30:47.202625 1095137 cri.go:89] found id: "c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
I0407 13:30:47.202630 1095137 cri.go:89] found id: ""
I0407 13:30:47.202637 1095137 logs.go:282] 2 containers: [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030]
I0407 13:30:47.202699 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.206762 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.210646 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0407 13:30:47.210745 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0407 13:30:47.284058 1095137 cri.go:89] found id: "74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
I0407 13:30:47.284131 1095137 cri.go:89] found id: "77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
I0407 13:30:47.284150 1095137 cri.go:89] found id: ""
I0407 13:30:47.284173 1095137 logs.go:282] 2 containers: [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b]
I0407 13:30:47.284264 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.290441 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.294067 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0407 13:30:47.294179 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0407 13:30:47.342560 1095137 cri.go:89] found id: "04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
I0407 13:30:47.342628 1095137 cri.go:89] found id: "2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
I0407 13:30:47.342646 1095137 cri.go:89] found id: ""
I0407 13:30:47.342669 1095137 logs.go:282] 2 containers: [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2]
I0407 13:30:47.342765 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.346752 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.351671 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0407 13:30:47.351794 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0407 13:30:47.412231 1095137 cri.go:89] found id: "e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
I0407 13:30:47.412307 1095137 cri.go:89] found id: "b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
I0407 13:30:47.412328 1095137 cri.go:89] found id: ""
I0407 13:30:47.412350 1095137 logs.go:282] 2 containers: [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb]
I0407 13:30:47.412437 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.416534 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.420684 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0407 13:30:47.420804 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0407 13:30:47.473376 1095137 cri.go:89] found id: "3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
I0407 13:30:47.473454 1095137 cri.go:89] found id: ""
I0407 13:30:47.473475 1095137 logs.go:282] 1 containers: [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625]
I0407 13:30:47.473560 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.477965 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0407 13:30:47.478087 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0407 13:30:47.526054 1095137 cri.go:89] found id: "2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
I0407 13:30:47.526129 1095137 cri.go:89] found id: "d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
I0407 13:30:47.526148 1095137 cri.go:89] found id: ""
I0407 13:30:47.526170 1095137 logs.go:282] 2 containers: [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849]
I0407 13:30:47.526254 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.531086 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.534990 1095137 logs.go:123] Gathering logs for describe nodes ...
I0407 13:30:47.535062 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0407 13:30:47.736664 1095137 logs.go:123] Gathering logs for coredns [e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce] ...
I0407 13:30:47.736704 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
I0407 13:30:47.789228 1095137 logs.go:123] Gathering logs for kubernetes-dashboard [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625] ...
I0407 13:30:47.789262 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
I0407 13:30:47.866453 1095137 logs.go:123] Gathering logs for storage-provisioner [d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849] ...
I0407 13:30:47.866486 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
I0407 13:30:47.912587 1095137 logs.go:123] Gathering logs for container status ...
I0407 13:30:47.912618 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0407 13:30:47.991125 1095137 logs.go:123] Gathering logs for kubelet ...
I0407 13:30:47.991154 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0407 13:30:48.065207 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307390 667 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.065569 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307927 667 reflector.go:138] object-"kube-system"/"kube-proxy-token-j6crq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-j6crq" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.065836 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308116 667 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.066068 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308292 667 reflector.go:138] object-"kube-system"/"storage-provisioner-token-nvxlj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-nvxlj" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.066279 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308445 667 reflector.go:138] object-"default"/"default-token-znh7g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-znh7g" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.066499 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308590 667 reflector.go:138] object-"kube-system"/"kindnet-token-fxnc5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-fxnc5" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.066775 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308753 667 reflector.go:138] object-"kube-system"/"coredns-token-sjxkg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-sjxkg" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.072894 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:15 old-k8s-version-856421 kubelet[667]: E0407 13:25:15.094522 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:48.078579 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:16 old-k8s-version-856421 kubelet[667]: E0407 13:25:16.056738 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.082673 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:27 old-k8s-version-856421 kubelet[667]: E0407 13:25:27.890508 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:48.084370 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:40 old-k8s-version-856421 kubelet[667]: E0407 13:25:40.901804 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.084966 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:42 old-k8s-version-856421 kubelet[667]: E0407 13:25:42.190528 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.085629 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:43 old-k8s-version-856421 kubelet[667]: E0407 13:25:43.194128 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.086140 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:47 old-k8s-version-856421 kubelet[667]: E0407 13:25:47.208836 667 pod_workers.go:191] Error syncing pod ffa09209-8141-4692-8b43-e212485a4adb ("storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"
W0407 13:30:48.086480 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:49 old-k8s-version-856421 kubelet[667]: E0407 13:25:49.601173 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.089328 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:55 old-k8s-version-856421 kubelet[667]: E0407 13:25:55.894550 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:48.090071 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:02 old-k8s-version-856421 kubelet[667]: E0407 13:26:02.259589 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.090263 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:07 old-k8s-version-856421 kubelet[667]: E0407 13:26:07.882067 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.090596 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:09 old-k8s-version-856421 kubelet[667]: E0407 13:26:09.601119 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.090781 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:21 old-k8s-version-856421 kubelet[667]: E0407 13:26:21.882035 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.091367 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:23 old-k8s-version-856421 kubelet[667]: E0407 13:26:23.333979 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.091693 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:29 old-k8s-version-856421 kubelet[667]: E0407 13:26:29.601138 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.091878 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:33 old-k8s-version-856421 kubelet[667]: E0407 13:26:33.882060 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.092204 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:42 old-k8s-version-856421 kubelet[667]: E0407 13:26:42.882285 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.094653 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:46 old-k8s-version-856421 kubelet[667]: E0407 13:26:46.916880 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:48.094984 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:53 old-k8s-version-856421 kubelet[667]: E0407 13:26:53.881641 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.095172 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:58 old-k8s-version-856421 kubelet[667]: E0407 13:26:58.887165 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.095764 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:05 old-k8s-version-856421 kubelet[667]: E0407 13:27:05.451459 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.096091 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:09 old-k8s-version-856421 kubelet[667]: E0407 13:27:09.601083 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.096275 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:11 old-k8s-version-856421 kubelet[667]: E0407 13:27:11.882020 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.096603 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:22 old-k8s-version-856421 kubelet[667]: E0407 13:27:22.882870 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.096788 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:24 old-k8s-version-856421 kubelet[667]: E0407 13:27:24.883611 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.097167 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:33 old-k8s-version-856421 kubelet[667]: E0407 13:27:33.881645 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.097363 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:36 old-k8s-version-856421 kubelet[667]: E0407 13:27:36.883495 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.097692 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:46 old-k8s-version-856421 kubelet[667]: E0407 13:27:46.882237 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.097891 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:49 old-k8s-version-856421 kubelet[667]: E0407 13:27:49.882049 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.098077 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:00 old-k8s-version-856421 kubelet[667]: E0407 13:28:00.882106 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.098408 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:01 old-k8s-version-856421 kubelet[667]: E0407 13:28:01.881859 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.098735 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:13 old-k8s-version-856421 kubelet[667]: E0407 13:28:13.882356 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.101174 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:15 old-k8s-version-856421 kubelet[667]: E0407 13:28:15.895177 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:48.101500 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:24 old-k8s-version-856421 kubelet[667]: E0407 13:28:24.882233 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.101683 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:27 old-k8s-version-856421 kubelet[667]: E0407 13:28:27.882283 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.102314 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:36 old-k8s-version-856421 kubelet[667]: E0407 13:28:36.680281 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.102503 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:38 old-k8s-version-856421 kubelet[667]: E0407 13:28:38.882208 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.102831 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:39 old-k8s-version-856421 kubelet[667]: E0407 13:28:39.601171 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.103015 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:50 old-k8s-version-856421 kubelet[667]: E0407 13:28:50.882465 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.103343 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:52 old-k8s-version-856421 kubelet[667]: E0407 13:28:52.882220 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.103529 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:01 old-k8s-version-856421 kubelet[667]: E0407 13:29:01.882101 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.103856 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:04 old-k8s-version-856421 kubelet[667]: E0407 13:29:04.885771 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.104040 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:15 old-k8s-version-856421 kubelet[667]: E0407 13:29:15.882105 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.104366 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:19 old-k8s-version-856421 kubelet[667]: E0407 13:29:19.881643 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.104552 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:28 old-k8s-version-856421 kubelet[667]: E0407 13:29:28.884253 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.105009 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:32 old-k8s-version-856421 kubelet[667]: E0407 13:29:32.882068 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.105201 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:39 old-k8s-version-856421 kubelet[667]: E0407 13:29:39.883031 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.105541 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:47 old-k8s-version-856421 kubelet[667]: E0407 13:29:47.882527 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.105739 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:50 old-k8s-version-856421 kubelet[667]: E0407 13:29:50.882583 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.106067 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:59 old-k8s-version-856421 kubelet[667]: E0407 13:29:59.882436 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.106251 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:01 old-k8s-version-856421 kubelet[667]: E0407 13:30:01.885267 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.106586 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: E0407 13:30:10.882871 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.106770 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.107101 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.107285 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.107610 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.107794 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0407 13:30:48.107807 1095137 logs.go:123] Gathering logs for kube-proxy [77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b] ...
I0407 13:30:48.107822 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
I0407 13:30:48.156575 1095137 logs.go:123] Gathering logs for kindnet [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b] ...
I0407 13:30:48.156606 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
I0407 13:30:48.232444 1095137 logs.go:123] Gathering logs for kindnet [b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb] ...
I0407 13:30:48.232472 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
I0407 13:30:48.305914 1095137 logs.go:123] Gathering logs for kube-apiserver [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af] ...
I0407 13:30:48.305993 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
I0407 13:30:48.379011 1095137 logs.go:123] Gathering logs for kube-apiserver [d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b] ...
I0407 13:30:48.379086 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
I0407 13:30:48.462552 1095137 logs.go:123] Gathering logs for etcd [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5] ...
I0407 13:30:48.462584 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
I0407 13:30:48.528785 1095137 logs.go:123] Gathering logs for kube-scheduler [c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030] ...
I0407 13:30:48.528974 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
I0407 13:30:48.589264 1095137 logs.go:123] Gathering logs for kube-controller-manager [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df] ...
I0407 13:30:48.589336 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
I0407 13:30:48.680565 1095137 logs.go:123] Gathering logs for kube-controller-manager [2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2] ...
I0407 13:30:48.680604 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
I0407 13:30:48.779599 1095137 logs.go:123] Gathering logs for storage-provisioner [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61] ...
I0407 13:30:48.779675 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
I0407 13:30:48.835392 1095137 logs.go:123] Gathering logs for containerd ...
I0407 13:30:48.835418 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0407 13:30:48.929349 1095137 logs.go:123] Gathering logs for dmesg ...
I0407 13:30:48.929382 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0407 13:30:48.954396 1095137 logs.go:123] Gathering logs for etcd [ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735] ...
I0407 13:30:48.954423 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
I0407 13:30:49.030928 1095137 logs.go:123] Gathering logs for coredns [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a] ...
I0407 13:30:49.031024 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
I0407 13:30:49.110624 1095137 logs.go:123] Gathering logs for kube-scheduler [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7] ...
I0407 13:30:49.110700 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
I0407 13:30:49.161794 1095137 logs.go:123] Gathering logs for kube-proxy [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088] ...
I0407 13:30:49.161888 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
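Each "Gathering logs for ..." pair above is minikube shelling into the node and tailing one container's log. To re-run a single read by hand against this profile (a sketch, reusing the exact command and a container ID from the lines above; the ID is only valid while that container still exists):

  minikube -p old-k8s-version-856421 ssh -- sudo /usr/bin/crictl logs --tail 400 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b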
I0407 13:30:49.226058 1095137 out.go:358] Setting ErrFile to fd 2...
I0407 13:30:49.226135 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0407 13:30:49.226216 1095137 out.go:270] X Problems detected in kubelet:
W0407 13:30:49.226386 1095137 out.go:270] Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:49.226426 1095137 out.go:270] Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:49.226481 1095137 out.go:270] Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:49.226514 1095137 out.go:270] Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:49.226557 1095137 out.go:270] Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0407 13:30:49.226602 1095137 out.go:358] Setting ErrFile to fd 2...
I0407 13:30:49.226640 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:30:59.228033 1095137 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0407 13:30:59.239760 1095137 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
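The healthz probe above is an HTTPS GET against the apiserver at 192.168.76.2:8443 inside the docker network. From the host, the same endpoint is reachable through the 8443/tcp publish shown in the docker inspect output below (127.0.0.1:34183); a sketch, assuming that mapping is still live, with -k because the server presents the minikubeCA-signed certificate:

  curl -k https://127.0.0.1:34183/healthz
  # a healthy apiserver answers with the bare body "ok", as logged above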
I0407 13:30:59.244653 1095137 out.go:201]
W0407 13:30:59.247535 1095137 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0407 13:30:59.247763 1095137 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0407 13:30:59.247829 1095137 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0407 13:30:59.247882 1095137 out.go:270] *
W0407 13:30:59.248818 1095137 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0407 13:30:59.252447 1095137 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-856421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
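The suggestion block in the stderr above is the standard escape hatch for K8S_UNHEALTHY_CONTROL_PLANE; a sketch of the recovery and log-capture steps, using only the commands the output itself recommends:

  minikube delete --all --purge        # discard the stuck profile and its state
  minikube logs --file=logs.txt        # capture logs.txt to attach to the GitHub issue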
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-856421
helpers_test.go:235: (dbg) docker inspect old-k8s-version-856421:
-- stdout --
[
{
"Id": "0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0",
"Created": "2025-04-07T13:21:37.743742867Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1095306,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-04-07T13:24:46.516691813Z",
"FinishedAt": "2025-04-07T13:24:45.494837311Z"
},
"Image": "sha256:1a97cd9e9bbab266425b883d3ed87fee4969302ed9a49ce4df4bf460f6bbf404",
"ResolvConfPath": "/var/lib/docker/containers/0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0/hostname",
"HostsPath": "/var/lib/docker/containers/0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0/hosts",
"LogPath": "/var/lib/docker/containers/0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0/0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0-json.log",
"Name": "/old-k8s-version-856421",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-856421:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-856421",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "0ec7499281b1af3975317f28c2f45ab23ffdf6f53892f067820d159417dc17d0",
"LowerDir": "/var/lib/docker/overlay2/3f8cb1cfa6829451e2d68ed2a44b6f349cb628ec76ddd639b24aac6efa846b9f-init/diff:/var/lib/docker/overlay2/85f90d92e092517cca50dbac98636b783956eaa528934db46fb23992a850b0ad/diff",
"MergedDir": "/var/lib/docker/overlay2/3f8cb1cfa6829451e2d68ed2a44b6f349cb628ec76ddd639b24aac6efa846b9f/merged",
"UpperDir": "/var/lib/docker/overlay2/3f8cb1cfa6829451e2d68ed2a44b6f349cb628ec76ddd639b24aac6efa846b9f/diff",
"WorkDir": "/var/lib/docker/overlay2/3f8cb1cfa6829451e2d68ed2a44b6f349cb628ec76ddd639b24aac6efa846b9f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "old-k8s-version-856421",
"Source": "/var/lib/docker/volumes/old-k8s-version-856421/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "old-k8s-version-856421",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-856421",
"name.minikube.sigs.k8s.io": "old-k8s-version-856421",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "c93c015a8f4613610fe06f13d793b5e51fad2752271eba1152ee8674fb2da0ea",
"SandboxKey": "/var/run/docker/netns/c93c015a8f46",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34180"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34181"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34184"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34182"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34183"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-856421": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "5a:54:29:5c:23:cd",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "84e74c9d326f8cf1ccd793b6c9408d565d75088ea9d7271ce39b18e3801f5b6e",
"EndpointID": "23bae7889b8bb363aa570e2103614084d7f01195d0aa49291700a361d986a40a",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-856421",
"0ec7499281b1"
]
}
}
}
}
]
-- /stdout --
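The later cli_runner lines (e.g. for embed-certs-688390 below) read individual fields out of this same inspect document with Go templates. A sketch of the equivalent one-off query against this profile's container, pulling the published SSH port from the NetworkSettings.Ports block above:

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-856421
  # prints 34180, matching the 22/tcp mapping shown above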
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-856421 -n old-k8s-version-856421
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-856421 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-856421 logs -n 25: (2.966783748s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| ssh | cert-options-839524 ssh | cert-options-839524 | jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:21 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-839524 -- sudo | cert-options-839524 | jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:21 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-839524 | cert-options-839524 | jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:21 UTC |
| start | -p old-k8s-version-856421 | old-k8s-version-856421 | jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:24 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-618228 | cert-expiration-618228 | jenkins | v1.35.0 | 07 Apr 25 13:22 UTC | 07 Apr 25 13:23 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-618228 | cert-expiration-618228 | jenkins | v1.35.0 | 07 Apr 25 13:23 UTC | 07 Apr 25 13:23 UTC |
| start | -p no-preload-789804 | no-preload-789804 | jenkins | v1.35.0 | 07 Apr 25 13:23 UTC | 07 Apr 25 13:24 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p no-preload-789804 | no-preload-789804 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-789804 | no-preload-789804 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable metrics-server -p old-k8s-version-856421 | old-k8s-version-856421 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-856421 | old-k8s-version-856421 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-789804 | no-preload-789804 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-789804 | no-preload-789804 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:29 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable dashboard -p old-k8s-version-856421 | old-k8s-version-856421 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-856421 | old-k8s-version-856421 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| image | no-preload-789804 image list | no-preload-789804 | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:29 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-789804 | no-preload-789804 | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:29 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-789804 | no-preload-789804 | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:29 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-789804 | no-preload-789804 | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:29 UTC |
| delete | -p no-preload-789804 | no-preload-789804 | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:29 UTC |
| start | -p embed-certs-688390 | embed-certs-688390 | jenkins | v1.35.0 | 07 Apr 25 13:29 UTC | 07 Apr 25 13:30 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p embed-certs-688390 | embed-certs-688390 | jenkins | v1.35.0 | 07 Apr 25 13:30 UTC | 07 Apr 25 13:30 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p embed-certs-688390 | embed-certs-688390 | jenkins | v1.35.0 | 07 Apr 25 13:30 UTC | 07 Apr 25 13:30 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p embed-certs-688390 | embed-certs-688390 | jenkins | v1.35.0 | 07 Apr 25 13:30 UTC | 07 Apr 25 13:30 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p embed-certs-688390 | embed-certs-688390 | jenkins | v1.35.0 | 07 Apr 25 13:30 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/07 13:30:41
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.24.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0407 13:30:41.565899 1107590 out.go:345] Setting OutFile to fd 1 ...
I0407 13:30:41.566046 1107590 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:30:41.566070 1107590 out.go:358] Setting ErrFile to fd 2...
I0407 13:30:41.566092 1107590 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:30:41.566388 1107590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-873072/.minikube/bin
I0407 13:30:41.566803 1107590 out.go:352] Setting JSON to false
I0407 13:30:41.567890 1107590 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18786,"bootTime":1744013856,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I0407 13:30:41.567962 1107590 start.go:139] virtualization:
I0407 13:30:41.572889 1107590 out.go:177] * [embed-certs-688390] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0407 13:30:41.576010 1107590 out.go:177] - MINIKUBE_LOCATION=20602
I0407 13:30:41.576036 1107590 notify.go:220] Checking for updates...
I0407 13:30:41.579151 1107590 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0407 13:30:41.582075 1107590 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20602-873072/kubeconfig
I0407 13:30:41.585126 1107590 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-873072/.minikube
I0407 13:30:41.588117 1107590 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0407 13:30:41.591038 1107590 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0407 13:30:41.594523 1107590 config.go:182] Loaded profile config "embed-certs-688390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 13:30:41.595104 1107590 driver.go:394] Setting default libvirt URI to qemu:///system
I0407 13:30:41.621022 1107590 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0407 13:30:41.621172 1107590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0407 13:30:41.683285 1107590 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 13:30:41.673009013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0407 13:30:41.683397 1107590 docker.go:318] overlay module found
I0407 13:30:41.686547 1107590 out.go:177] * Using the docker driver based on existing profile
I0407 13:30:41.689509 1107590 start.go:297] selected driver: docker
I0407 13:30:41.689535 1107590 start.go:901] validating driver "docker" against &{Name:embed-certs-688390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-688390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 13:30:41.689653 1107590 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0407 13:30:41.690420 1107590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0407 13:30:41.753168 1107590 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 13:30:41.743995038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0407 13:30:41.753512 1107590 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0407 13:30:41.753545 1107590 cni.go:84] Creating CNI manager for ""
I0407 13:30:41.753606 1107590 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0407 13:30:41.753653 1107590 start.go:340] cluster config:
{Name:embed-certs-688390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-688390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 13:30:41.758727 1107590 out.go:177] * Starting "embed-certs-688390" primary control-plane node in "embed-certs-688390" cluster
I0407 13:30:41.761636 1107590 cache.go:121] Beginning downloading kic base image for docker with containerd
I0407 13:30:41.764759 1107590 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
I0407 13:30:41.767613 1107590 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0407 13:30:41.767682 1107590 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4
I0407 13:30:41.767691 1107590 cache.go:56] Caching tarball of preloaded images
I0407 13:30:41.767734 1107590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
I0407 13:30:41.767793 1107590 preload.go:172] Found /home/jenkins/minikube-integration/20602-873072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0407 13:30:41.767803 1107590 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0407 13:30:41.767926 1107590 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/config.json ...
I0407 13:30:41.788110 1107590 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
I0407 13:30:41.788134 1107590 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
I0407 13:30:41.788152 1107590 cache.go:230] Successfully downloaded all kic artifacts
I0407 13:30:41.788175 1107590 start.go:360] acquireMachinesLock for embed-certs-688390: {Name:mk224d0616c94c039dbad0154f78977cda80f3b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:30:41.788262 1107590 start.go:364] duration metric: took 57.289µs to acquireMachinesLock for "embed-certs-688390"
I0407 13:30:41.788293 1107590 start.go:96] Skipping create...Using existing machine configuration
I0407 13:30:41.788351 1107590 fix.go:54] fixHost starting:
I0407 13:30:41.788611 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
I0407 13:30:41.805616 1107590 fix.go:112] recreateIfNeeded on embed-certs-688390: state=Stopped err=<nil>
W0407 13:30:41.805649 1107590 fix.go:138] unexpected machine state, will restart: <nil>
I0407 13:30:41.808804 1107590 out.go:177] * Restarting existing docker container for "embed-certs-688390" ...
I0407 13:30:41.811783 1107590 cli_runner.go:164] Run: docker start embed-certs-688390
I0407 13:30:42.127773 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
I0407 13:30:42.157334 1107590 kic.go:430] container "embed-certs-688390" state is running.
I0407 13:30:42.157838 1107590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-688390
I0407 13:30:42.183983 1107590 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/config.json ...
I0407 13:30:42.184375 1107590 machine.go:93] provisionDockerMachine start ...
I0407 13:30:42.184469 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:42.219282 1107590 main.go:141] libmachine: Using SSH client type: native
I0407 13:30:42.219728 1107590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34190 <nil> <nil>}
I0407 13:30:42.219744 1107590 main.go:141] libmachine: About to run SSH command:
hostname
I0407 13:30:42.220548 1107590 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35454->127.0.0.1:34190: read: connection reset by peer
I0407 13:30:45.382821 1107590 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-688390
I0407 13:30:45.382926 1107590 ubuntu.go:169] provisioning hostname "embed-certs-688390"
I0407 13:30:45.383037 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:45.404547 1107590 main.go:141] libmachine: Using SSH client type: native
I0407 13:30:45.405146 1107590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34190 <nil> <nil>}
I0407 13:30:45.405167 1107590 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-688390 && echo "embed-certs-688390" | sudo tee /etc/hostname
I0407 13:30:45.547583 1107590 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-688390
I0407 13:30:45.547669 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:45.566078 1107590 main.go:141] libmachine: Using SSH client type: native
I0407 13:30:45.566411 1107590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34190 <nil> <nil>}
I0407 13:30:45.566435 1107590 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-688390' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-688390/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-688390' | sudo tee -a /etc/hosts;
fi
fi
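The remote script above is an idempotent /etc/hosts fix-up: it rewrites the 127.0.1.1 entry only when no existing line already carries the new hostname. A quick check of the result on the node (a sketch, assuming the profile from the surrounding lines is up):

  minikube -p embed-certs-688390 ssh -- grep -n embed-certs-688390 /etc/hosts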
I0407 13:30:45.690149 1107590 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0407 13:30:45.690176 1107590 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20602-873072/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-873072/.minikube}
I0407 13:30:45.690200 1107590 ubuntu.go:177] setting up certificates
I0407 13:30:45.690210 1107590 provision.go:84] configureAuth start
I0407 13:30:45.690274 1107590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-688390
I0407 13:30:45.709416 1107590 provision.go:143] copyHostCerts
I0407 13:30:45.709488 1107590 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem, removing ...
I0407 13:30:45.709513 1107590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem
I0407 13:30:45.709592 1107590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/ca.pem (1078 bytes)
I0407 13:30:45.709764 1107590 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem, removing ...
I0407 13:30:45.709776 1107590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem
I0407 13:30:45.709813 1107590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/cert.pem (1123 bytes)
I0407 13:30:45.709892 1107590 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem, removing ...
I0407 13:30:45.709902 1107590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem
I0407 13:30:45.709936 1107590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-873072/.minikube/key.pem (1675 bytes)
I0407 13:30:45.710001 1107590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem org=jenkins.embed-certs-688390 san=[127.0.0.1 192.168.85.2 embed-certs-688390 localhost minikube]
I0407 13:30:46.055120 1107590 provision.go:177] copyRemoteCerts
I0407 13:30:46.055193 1107590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0407 13:30:46.055234 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:46.073901 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
I0407 13:30:46.162931 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0407 13:30:46.188052 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0407 13:30:46.214271 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0407 13:30:46.244649 1107590 provision.go:87] duration metric: took 554.420681ms to configureAuth
I0407 13:30:46.244719 1107590 ubuntu.go:193] setting minikube options for container-runtime
I0407 13:30:46.244946 1107590 config.go:182] Loaded profile config "embed-certs-688390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 13:30:46.244964 1107590 machine.go:96] duration metric: took 4.06057611s to provisionDockerMachine
I0407 13:30:46.244974 1107590 start.go:293] postStartSetup for "embed-certs-688390" (driver="docker")
I0407 13:30:46.244985 1107590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0407 13:30:46.245038 1107590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0407 13:30:46.245091 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:46.262856 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
I0407 13:30:46.359623 1107590 ssh_runner.go:195] Run: cat /etc/os-release
I0407 13:30:46.363088 1107590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0407 13:30:46.363126 1107590 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0407 13:30:46.363137 1107590 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0407 13:30:46.363144 1107590 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0407 13:30:46.363154 1107590 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-873072/.minikube/addons for local assets ...
I0407 13:30:46.363209 1107590 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-873072/.minikube/files for local assets ...
I0407 13:30:46.363294 1107590 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem -> 8785942.pem in /etc/ssl/certs
I0407 13:30:46.363413 1107590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0407 13:30:46.372751 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem --> /etc/ssl/certs/8785942.pem (1708 bytes)
I0407 13:30:46.398518 1107590 start.go:296] duration metric: took 153.528422ms for postStartSetup
I0407 13:30:46.398657 1107590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0407 13:30:46.398707 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:46.416799 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
I0407 13:30:46.502710 1107590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
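The two df probes above are a quick capacity check on /var before the host is declared fixed: the first awk prints field 5 of the second output line (percent of space used), the second prints field 4 in 1 GiB blocks (space still available). Run by hand, the second check would look like:

  df -BG /var | awk 'NR==2{print $4}'    # available space on /var, e.g. "174G"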
I0407 13:30:46.507291 1107590 fix.go:56] duration metric: took 4.718931945s for fixHost
I0407 13:30:46.507316 1107590 start.go:83] releasing machines lock for "embed-certs-688390", held for 4.719039532s
I0407 13:30:46.507381 1107590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-688390
I0407 13:30:46.525049 1107590 ssh_runner.go:195] Run: cat /version.json
I0407 13:30:46.525108 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:46.525357 1107590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0407 13:30:46.525405 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:46.556516 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
I0407 13:30:46.559715 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
I0407 13:30:46.781232 1107590 ssh_runner.go:195] Run: systemctl --version
I0407 13:30:46.785892 1107590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0407 13:30:46.790700 1107590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0407 13:30:46.809689 1107590 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
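The find/sed one-liner above gives every loopback CNI config an explicit "name" field and pins "cniVersion" to 1.0.0, which containerd 1.7's CNI plugin accepts. Assuming the stock loopback config shipped in the kic base image, the patched file plausibly reads:

  {
    "cniVersion": "1.0.0",
    "name": "loopback",
    "type": "loopback"
  }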
I0407 13:30:46.809788 1107590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0407 13:30:46.820522 1107590 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0407 13:30:46.820545 1107590 start.go:495] detecting cgroup driver to use...
I0407 13:30:46.820578 1107590 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0407 13:30:46.820629 1107590 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0407 13:30:46.838341 1107590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0407 13:30:46.861583 1107590 docker.go:217] disabling cri-docker service (if available) ...
I0407 13:30:46.861721 1107590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0407 13:30:46.885229 1107590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0407 13:30:46.898634 1107590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0407 13:30:47.026720 1107590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0407 13:30:47.153939 1107590 docker.go:233] disabling docker service ...
I0407 13:30:47.154062 1107590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0407 13:30:47.170436 1107590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0407 13:30:47.183693 1107590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0407 13:30:47.315527 1107590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0407 13:30:47.449767 1107590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
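Because containerd is the selected runtime, the cri-docker and docker units are not just stopped but also disabled and masked; masking symlinks a unit file to /dev/null so that neither a dependency nor socket activation can start it again. The end state can be confirmed with:

  systemctl is-enabled docker.service    # prints "masked"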
I0407 13:30:47.463770 1107590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0407 13:30:47.486900 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0407 13:30:47.500570 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0407 13:30:47.514776 1107590 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0407 13:30:47.514851 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0407 13:30:47.530042 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 13:30:47.542443 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0407 13:30:47.558973 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 13:30:47.570359 1107590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0407 13:30:47.581007 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0407 13:30:47.592571 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0407 13:30:47.604567 1107590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
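Taken together, the sed edits above rewrite /etc/containerd/config.toml in place to match the detected "cgroupfs" driver and minikube's CNI layout. A minimal sketch of the affected fragment after patching, assuming the stock containerd 1.7 config structure, is:

  [plugins."io.containerd.grpc.v1.cri"]
    enable_unprivileged_ports = true
    restrict_oom_score_adj = false
    sandbox_image = "registry.k8s.io/pause:3.10"
    [plugins."io.containerd.grpc.v1.cri".cni]
      conf_dir = "/etc/cni/net.d"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false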
I0407 13:30:47.616891 1107590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0407 13:30:47.627954 1107590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0407 13:30:47.638405 1107590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:30:47.764752 1107590 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0407 13:30:48.010970 1107590 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0407 13:30:48.011070 1107590 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0407 13:30:48.016902 1107590 start.go:563] Will wait 60s for crictl version
I0407 13:30:48.017002 1107590 ssh_runner.go:195] Run: which crictl
I0407 13:30:48.030582 1107590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0407 13:30:48.119713 1107590 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0407 13:30:48.119802 1107590 ssh_runner.go:195] Run: containerd --version
I0407 13:30:48.158606 1107590 ssh_runner.go:195] Run: containerd --version
I0407 13:30:48.194030 1107590 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.27 ...
I0407 13:30:48.196953 1107590 cli_runner.go:164] Run: docker network inspect embed-certs-688390 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 13:30:48.222137 1107590 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0407 13:30:48.226517 1107590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0407 13:30:48.242012 1107590 kubeadm.go:883] updating cluster {Name:embed-certs-688390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-688390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0407 13:30:48.242151 1107590 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0407 13:30:48.242209 1107590 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 13:30:48.294584 1107590 containerd.go:627] all images are preloaded for containerd runtime.
I0407 13:30:48.294605 1107590 containerd.go:534] Images already preloaded, skipping extraction
I0407 13:30:48.294670 1107590 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 13:30:48.372573 1107590 containerd.go:627] all images are preloaded for containerd runtime.
I0407 13:30:48.372598 1107590 cache_images.go:84] Images are preloaded, skipping loading
I0407 13:30:48.372606 1107590 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
I0407 13:30:48.372709 1107590 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-688390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:embed-certs-688390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
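The generated drop-in uses the standard systemd override pattern: the empty ExecStart= clears the command inherited from kubelet.service before the full flag set is supplied. Once 10-kubeadm.conf is scp'd into place a few lines below, the merged unit can be inspected with:

  systemctl cat kubelet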
I0407 13:30:48.372782 1107590 ssh_runner.go:195] Run: sudo crictl info
I0407 13:30:48.424826 1107590 cni.go:84] Creating CNI manager for ""
I0407 13:30:48.424856 1107590 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0407 13:30:48.424867 1107590 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0407 13:30:48.424889 1107590 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-688390 NodeName:embed-certs-688390 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0407 13:30:48.425005 1107590 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-688390"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
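The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new below and later diffed against the existing kubeadm.yaml to decide whether the control plane needs reconfiguring. With the v1.32.2 binaries on the node, the file could in principle also be sanity-checked by hand with:

  sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new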
I0407 13:30:48.425090 1107590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0407 13:30:48.451243 1107590 binaries.go:44] Found k8s binaries, skipping transfer
I0407 13:30:48.451319 1107590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0407 13:30:48.467426 1107590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0407 13:30:48.492121 1107590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0407 13:30:48.514477 1107590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0407 13:30:48.538408 1107590 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0407 13:30:48.543736 1107590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
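Both hosts edits use the same replace-then-copy idiom: grep -v drops any stale entry, the fresh mapping is appended, and the temp file is copied back over /etc/hosts with sudo. Afterwards the file carries the two minikube-specific mappings:

  192.168.85.1 host.minikube.internal
  192.168.85.2 control-plane.minikube.internal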
I0407 13:30:48.559395 1107590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:30:48.687669 1107590 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0407 13:30:48.705818 1107590 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390 for IP: 192.168.85.2
I0407 13:30:48.705889 1107590 certs.go:194] generating shared ca certs ...
I0407 13:30:48.705920 1107590 certs.go:226] acquiring lock for ca certs: {Name:mk03094d90434f2a42c24ebaddfee021594c5911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:30:48.706080 1107590 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-873072/.minikube/ca.key
I0407 13:30:48.706168 1107590 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.key
I0407 13:30:48.706193 1107590 certs.go:256] generating profile certs ...
I0407 13:30:48.706312 1107590 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/client.key
I0407 13:30:48.706432 1107590 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/apiserver.key.bc2ed1e9
I0407 13:30:48.706521 1107590 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/proxy-client.key
I0407 13:30:48.706662 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594.pem (1338 bytes)
W0407 13:30:48.706735 1107590 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594_empty.pem, impossibly tiny 0 bytes
I0407 13:30:48.706762 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca-key.pem (1675 bytes)
I0407 13:30:48.706816 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/ca.pem (1078 bytes)
I0407 13:30:48.706860 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/cert.pem (1123 bytes)
I0407 13:30:48.706913 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/certs/key.pem (1675 bytes)
I0407 13:30:48.706981 1107590 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem (1708 bytes)
I0407 13:30:48.707616 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0407 13:30:48.774365 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0407 13:30:48.826798 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0407 13:30:48.868159 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0407 13:30:48.946198 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0407 13:30:49.015151 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0407 13:30:49.083090 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0407 13:30:49.137662 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/profiles/embed-certs-688390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0407 13:30:49.172336 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/files/etc/ssl/certs/8785942.pem --> /usr/share/ca-certificates/8785942.pem (1708 bytes)
I0407 13:30:49.204223 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0407 13:30:49.236211 1107590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-873072/.minikube/certs/878594.pem --> /usr/share/ca-certificates/878594.pem (1338 bytes)
I0407 13:30:49.262654 1107590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0407 13:30:49.282028 1107590 ssh_runner.go:195] Run: openssl version
I0407 13:30:49.288180 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8785942.pem && ln -fs /usr/share/ca-certificates/8785942.pem /etc/ssl/certs/8785942.pem"
I0407 13:30:49.298535 1107590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8785942.pem
I0407 13:30:49.302471 1107590 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 7 12:44 /usr/share/ca-certificates/8785942.pem
I0407 13:30:49.302595 1107590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8785942.pem
I0407 13:30:49.310206 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8785942.pem /etc/ssl/certs/3ec20f2e.0"
I0407 13:30:49.320210 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0407 13:30:49.330434 1107590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0407 13:30:49.334347 1107590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 7 12:37 /usr/share/ca-certificates/minikubeCA.pem
I0407 13:30:49.334432 1107590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0407 13:30:49.342273 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0407 13:30:49.351924 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/878594.pem && ln -fs /usr/share/ca-certificates/878594.pem /etc/ssl/certs/878594.pem"
I0407 13:30:49.362078 1107590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/878594.pem
I0407 13:30:49.365810 1107590 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 7 12:44 /usr/share/ca-certificates/878594.pem
I0407 13:30:49.365920 1107590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/878594.pem
I0407 13:30:49.373219 1107590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/878594.pem /etc/ssl/certs/51391683.0"
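Each certificate installed under /usr/share/ca-certificates is then exposed to OpenSSL through its subject-hash symlink: openssl x509 -hash prints the hash that libssl resolves as /etc/ssl/certs/<hash>.0. For the minikube CA above, that is:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem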
I0407 13:30:49.382767 1107590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0407 13:30:49.386569 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0407 13:30:49.394772 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0407 13:30:49.402041 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0407 13:30:49.409140 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0407 13:30:49.417225 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0407 13:30:49.426547 1107590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
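The -checkend 86400 runs ask OpenSSL whether each control-plane certificate remains valid for at least the next 86400 seconds (24 hours); openssl exits non-zero if a cert would expire inside that window, flagging it for renewal. Stand-alone, the same check reads:

  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid for >=24h"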
I0407 13:30:49.434287 1107590 kubeadm.go:392] StartCluster: {Name:embed-certs-688390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-688390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 13:30:49.434430 1107590 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0407 13:30:49.434498 1107590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0407 13:30:49.492549 1107590 cri.go:89] found id: "8667721e19675d05e2d11ed1e8ec92d4fb1005b2c5d6fb55a214d0dcf5a81a5e"
I0407 13:30:49.492575 1107590 cri.go:89] found id: "76558dbc4781d47881c50d48dd6bb28d54a860e776f29e297c8a31f9fe9cb90a"
I0407 13:30:49.492580 1107590 cri.go:89] found id: "7c7b4a0911eba6af046b71620c7c34ee6045e6dac2e779f492e07ee08922bac7"
I0407 13:30:49.492584 1107590 cri.go:89] found id: "6e5e7a1630068454fed3bd5b4b9ccd1c7c9d04dad311e54b96273ed75f41ece6"
I0407 13:30:49.492588 1107590 cri.go:89] found id: "dcef37010c5408321cc328b9c8c7066cd42c0f6012ecebef69dff33c682efaeb"
I0407 13:30:49.492594 1107590 cri.go:89] found id: "ae9061ad7363ded84522777bef558bcc6facfc004b1953a0a52a987f2585ca5c"
I0407 13:30:49.492598 1107590 cri.go:89] found id: "503be695ecd9c154b0b6ea612b87be1113c54984dea50c2eb301b5baffe211b7"
I0407 13:30:49.492602 1107590 cri.go:89] found id: "8a59a835709e5467410288c94735b586d8c71c63d83f5912eef5b0f36f403634"
I0407 13:30:49.492605 1107590 cri.go:89] found id: ""
I0407 13:30:49.492661 1107590 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0407 13:30:49.512222 1107590 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-04-07T13:30:49Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0407 13:30:49.512385 1107590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0407 13:30:49.531140 1107590 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0407 13:30:49.531212 1107590 kubeadm.go:593] restartPrimaryControlPlane start ...
I0407 13:30:49.531315 1107590 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0407 13:30:49.542867 1107590 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0407 13:30:49.543628 1107590 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-688390" does not appear in /home/jenkins/minikube-integration/20602-873072/kubeconfig
I0407 13:30:49.544016 1107590 kubeconfig.go:62] /home/jenkins/minikube-integration/20602-873072/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-688390" cluster setting kubeconfig missing "embed-certs-688390" context setting]
I0407 13:30:49.544606 1107590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-873072/kubeconfig: {Name:mk9de2da01a51fd73232a20700f86bdc259a91ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:30:49.546555 1107590 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0407 13:30:49.573879 1107590 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I0407 13:30:49.573980 1107590 kubeadm.go:597] duration metric: took 42.747109ms to restartPrimaryControlPlane
I0407 13:30:49.574007 1107590 kubeadm.go:394] duration metric: took 139.728981ms to StartCluster
I0407 13:30:49.574036 1107590 settings.go:142] acquiring lock: {Name:mk3e960f3698515246acbd5cb37ff276e0a43a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:30:49.574142 1107590 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20602-873072/kubeconfig
I0407 13:30:49.584958 1107590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-873072/kubeconfig: {Name:mk9de2da01a51fd73232a20700f86bdc259a91ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:30:49.585551 1107590 config.go:182] Loaded profile config "embed-certs-688390": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 13:30:49.585332 1107590 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0407 13:30:49.585688 1107590 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0407 13:30:49.587136 1107590 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-688390"
I0407 13:30:49.587180 1107590 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-688390"
W0407 13:30:49.587217 1107590 addons.go:247] addon storage-provisioner should already be in state true
I0407 13:30:49.587267 1107590 host.go:66] Checking if "embed-certs-688390" exists ...
I0407 13:30:49.587835 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
I0407 13:30:49.588054 1107590 addons.go:69] Setting default-storageclass=true in profile "embed-certs-688390"
I0407 13:30:49.588095 1107590 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-688390"
I0407 13:30:49.588289 1107590 addons.go:69] Setting metrics-server=true in profile "embed-certs-688390"
I0407 13:30:49.588302 1107590 addons.go:238] Setting addon metrics-server=true in "embed-certs-688390"
W0407 13:30:49.588309 1107590 addons.go:247] addon metrics-server should already be in state true
I0407 13:30:49.588328 1107590 host.go:66] Checking if "embed-certs-688390" exists ...
I0407 13:30:49.588720 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
I0407 13:30:49.589636 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
I0407 13:30:49.595502 1107590 out.go:177] * Verifying Kubernetes components...
I0407 13:30:49.589879 1107590 addons.go:69] Setting dashboard=true in profile "embed-certs-688390"
I0407 13:30:49.595937 1107590 addons.go:238] Setting addon dashboard=true in "embed-certs-688390"
W0407 13:30:49.595949 1107590 addons.go:247] addon dashboard should already be in state true
I0407 13:30:49.595990 1107590 host.go:66] Checking if "embed-certs-688390" exists ...
I0407 13:30:49.596435 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
I0407 13:30:49.599432 1107590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:30:49.669943 1107590 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0407 13:30:49.670067 1107590 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0407 13:30:49.672814 1107590 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0407 13:30:49.672854 1107590 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0407 13:30:49.672929 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:49.673234 1107590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:30:49.673245 1107590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0407 13:30:49.673286 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:49.702543 1107590 addons.go:238] Setting addon default-storageclass=true in "embed-certs-688390"
W0407 13:30:49.702569 1107590 addons.go:247] addon default-storageclass should already be in state true
I0407 13:30:49.702594 1107590 host.go:66] Checking if "embed-certs-688390" exists ...
I0407 13:30:49.703039 1107590 cli_runner.go:164] Run: docker container inspect embed-certs-688390 --format={{.State.Status}}
I0407 13:30:49.705457 1107590 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0407 13:30:49.716630 1107590 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0407 13:30:46.842901 1095137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 13:30:46.870854 1095137 api_server.go:72] duration metric: took 5m51.753710743s to wait for apiserver process to appear ...
I0407 13:30:46.870880 1095137 api_server.go:88] waiting for apiserver healthz status ...
I0407 13:30:46.870915 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0407 13:30:46.870969 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0407 13:30:46.986233 1095137 cri.go:89] found id: "a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
I0407 13:30:46.986252 1095137 cri.go:89] found id: "d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
I0407 13:30:46.986257 1095137 cri.go:89] found id: ""
I0407 13:30:46.986264 1095137 logs.go:282] 2 containers: [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b]
I0407 13:30:46.986340 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:46.990308 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:46.993838 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0407 13:30:46.993911 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0407 13:30:47.052280 1095137 cri.go:89] found id: "5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
I0407 13:30:47.052300 1095137 cri.go:89] found id: "ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
I0407 13:30:47.052305 1095137 cri.go:89] found id: ""
I0407 13:30:47.052313 1095137 logs.go:282] 2 containers: [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735]
I0407 13:30:47.052369 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.056223 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.059720 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0407 13:30:47.059794 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0407 13:30:47.130170 1095137 cri.go:89] found id: "051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
I0407 13:30:47.130191 1095137 cri.go:89] found id: "e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
I0407 13:30:47.130196 1095137 cri.go:89] found id: ""
I0407 13:30:47.130204 1095137 logs.go:282] 2 containers: [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce]
I0407 13:30:47.130261 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.134245 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.143189 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0407 13:30:47.143271 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0407 13:30:47.202603 1095137 cri.go:89] found id: "d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
I0407 13:30:47.202625 1095137 cri.go:89] found id: "c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
I0407 13:30:47.202630 1095137 cri.go:89] found id: ""
I0407 13:30:47.202637 1095137 logs.go:282] 2 containers: [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030]
I0407 13:30:47.202699 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.206762 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.210646 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0407 13:30:47.210745 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0407 13:30:47.284058 1095137 cri.go:89] found id: "74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
I0407 13:30:47.284131 1095137 cri.go:89] found id: "77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
I0407 13:30:47.284150 1095137 cri.go:89] found id: ""
I0407 13:30:47.284173 1095137 logs.go:282] 2 containers: [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b]
I0407 13:30:47.284264 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.290441 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.294067 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0407 13:30:47.294179 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0407 13:30:47.342560 1095137 cri.go:89] found id: "04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
I0407 13:30:47.342628 1095137 cri.go:89] found id: "2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
I0407 13:30:47.342646 1095137 cri.go:89] found id: ""
I0407 13:30:47.342669 1095137 logs.go:282] 2 containers: [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2]
I0407 13:30:47.342765 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.346752 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.351671 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0407 13:30:47.351794 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0407 13:30:47.412231 1095137 cri.go:89] found id: "e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
I0407 13:30:47.412307 1095137 cri.go:89] found id: "b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
I0407 13:30:47.412328 1095137 cri.go:89] found id: ""
I0407 13:30:47.412350 1095137 logs.go:282] 2 containers: [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb]
I0407 13:30:47.412437 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.416534 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.420684 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0407 13:30:47.420804 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0407 13:30:47.473376 1095137 cri.go:89] found id: "3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
I0407 13:30:47.473454 1095137 cri.go:89] found id: ""
I0407 13:30:47.473475 1095137 logs.go:282] 1 containers: [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625]
I0407 13:30:47.473560 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.477965 1095137 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0407 13:30:47.478087 1095137 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0407 13:30:47.526054 1095137 cri.go:89] found id: "2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
I0407 13:30:47.526129 1095137 cri.go:89] found id: "d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
I0407 13:30:47.526148 1095137 cri.go:89] found id: ""
I0407 13:30:47.526170 1095137 logs.go:282] 2 containers: [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849]
I0407 13:30:47.526254 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.531086 1095137 ssh_runner.go:195] Run: which crictl
I0407 13:30:47.534990 1095137 logs.go:123] Gathering logs for describe nodes ...
I0407 13:30:47.535062 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0407 13:30:47.736664 1095137 logs.go:123] Gathering logs for coredns [e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce] ...
I0407 13:30:47.736704 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce"
I0407 13:30:47.789228 1095137 logs.go:123] Gathering logs for kubernetes-dashboard [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625] ...
I0407 13:30:47.789262 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625"
I0407 13:30:47.866453 1095137 logs.go:123] Gathering logs for storage-provisioner [d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849] ...
I0407 13:30:47.866486 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849"
I0407 13:30:47.912587 1095137 logs.go:123] Gathering logs for container status ...
I0407 13:30:47.912618 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0407 13:30:47.991125 1095137 logs.go:123] Gathering logs for kubelet ...
I0407 13:30:47.991154 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0407 13:30:48.065207 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307390 667 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.065569 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.307927 667 reflector.go:138] object-"kube-system"/"kube-proxy-token-j6crq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-j6crq" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.065836 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308116 667 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.066068 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308292 667 reflector.go:138] object-"kube-system"/"storage-provisioner-token-nvxlj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-nvxlj" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.066279 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308445 667 reflector.go:138] object-"default"/"default-token-znh7g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-znh7g" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.066499 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308590 667 reflector.go:138] object-"kube-system"/"kindnet-token-fxnc5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-fxnc5" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.066775 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:14 old-k8s-version-856421 kubelet[667]: E0407 13:25:14.308753 667 reflector.go:138] object-"kube-system"/"coredns-token-sjxkg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-sjxkg" is forbidden: User "system:node:old-k8s-version-856421" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-856421' and this object
W0407 13:30:48.072894 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:15 old-k8s-version-856421 kubelet[667]: E0407 13:25:15.094522 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:48.078579 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:16 old-k8s-version-856421 kubelet[667]: E0407 13:25:16.056738 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.082673 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:27 old-k8s-version-856421 kubelet[667]: E0407 13:25:27.890508 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:48.084370 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:40 old-k8s-version-856421 kubelet[667]: E0407 13:25:40.901804 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.084966 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:42 old-k8s-version-856421 kubelet[667]: E0407 13:25:42.190528 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.085629 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:43 old-k8s-version-856421 kubelet[667]: E0407 13:25:43.194128 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.086140 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:47 old-k8s-version-856421 kubelet[667]: E0407 13:25:47.208836 667 pod_workers.go:191] Error syncing pod ffa09209-8141-4692-8b43-e212485a4adb ("storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ffa09209-8141-4692-8b43-e212485a4adb)"
W0407 13:30:48.086480 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:49 old-k8s-version-856421 kubelet[667]: E0407 13:25:49.601173 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.089328 1095137 logs.go:138] Found kubelet problem: Apr 07 13:25:55 old-k8s-version-856421 kubelet[667]: E0407 13:25:55.894550 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:48.090071 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:02 old-k8s-version-856421 kubelet[667]: E0407 13:26:02.259589 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.090263 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:07 old-k8s-version-856421 kubelet[667]: E0407 13:26:07.882067 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.090596 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:09 old-k8s-version-856421 kubelet[667]: E0407 13:26:09.601119 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.090781 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:21 old-k8s-version-856421 kubelet[667]: E0407 13:26:21.882035 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.091367 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:23 old-k8s-version-856421 kubelet[667]: E0407 13:26:23.333979 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.091693 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:29 old-k8s-version-856421 kubelet[667]: E0407 13:26:29.601138 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.091878 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:33 old-k8s-version-856421 kubelet[667]: E0407 13:26:33.882060 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.092204 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:42 old-k8s-version-856421 kubelet[667]: E0407 13:26:42.882285 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.094653 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:46 old-k8s-version-856421 kubelet[667]: E0407 13:26:46.916880 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:48.094984 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:53 old-k8s-version-856421 kubelet[667]: E0407 13:26:53.881641 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.095172 1095137 logs.go:138] Found kubelet problem: Apr 07 13:26:58 old-k8s-version-856421 kubelet[667]: E0407 13:26:58.887165 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.095764 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:05 old-k8s-version-856421 kubelet[667]: E0407 13:27:05.451459 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.096091 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:09 old-k8s-version-856421 kubelet[667]: E0407 13:27:09.601083 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.096275 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:11 old-k8s-version-856421 kubelet[667]: E0407 13:27:11.882020 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.096603 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:22 old-k8s-version-856421 kubelet[667]: E0407 13:27:22.882870 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.096788 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:24 old-k8s-version-856421 kubelet[667]: E0407 13:27:24.883611 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.097167 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:33 old-k8s-version-856421 kubelet[667]: E0407 13:27:33.881645 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.097363 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:36 old-k8s-version-856421 kubelet[667]: E0407 13:27:36.883495 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.097692 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:46 old-k8s-version-856421 kubelet[667]: E0407 13:27:46.882237 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.097891 1095137 logs.go:138] Found kubelet problem: Apr 07 13:27:49 old-k8s-version-856421 kubelet[667]: E0407 13:27:49.882049 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.098077 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:00 old-k8s-version-856421 kubelet[667]: E0407 13:28:00.882106 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.098408 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:01 old-k8s-version-856421 kubelet[667]: E0407 13:28:01.881859 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.098735 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:13 old-k8s-version-856421 kubelet[667]: E0407 13:28:13.882356 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.101174 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:15 old-k8s-version-856421 kubelet[667]: E0407 13:28:15.895177 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:30:48.101500 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:24 old-k8s-version-856421 kubelet[667]: E0407 13:28:24.882233 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.101683 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:27 old-k8s-version-856421 kubelet[667]: E0407 13:28:27.882283 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.102314 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:36 old-k8s-version-856421 kubelet[667]: E0407 13:28:36.680281 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.102503 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:38 old-k8s-version-856421 kubelet[667]: E0407 13:28:38.882208 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.102831 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:39 old-k8s-version-856421 kubelet[667]: E0407 13:28:39.601171 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.103015 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:50 old-k8s-version-856421 kubelet[667]: E0407 13:28:50.882465 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.103343 1095137 logs.go:138] Found kubelet problem: Apr 07 13:28:52 old-k8s-version-856421 kubelet[667]: E0407 13:28:52.882220 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.103529 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:01 old-k8s-version-856421 kubelet[667]: E0407 13:29:01.882101 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.103856 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:04 old-k8s-version-856421 kubelet[667]: E0407 13:29:04.885771 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.104040 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:15 old-k8s-version-856421 kubelet[667]: E0407 13:29:15.882105 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.104366 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:19 old-k8s-version-856421 kubelet[667]: E0407 13:29:19.881643 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.104552 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:28 old-k8s-version-856421 kubelet[667]: E0407 13:29:28.884253 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.105009 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:32 old-k8s-version-856421 kubelet[667]: E0407 13:29:32.882068 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.105201 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:39 old-k8s-version-856421 kubelet[667]: E0407 13:29:39.883031 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.105541 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:47 old-k8s-version-856421 kubelet[667]: E0407 13:29:47.882527 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.105739 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:50 old-k8s-version-856421 kubelet[667]: E0407 13:29:50.882583 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.106067 1095137 logs.go:138] Found kubelet problem: Apr 07 13:29:59 old-k8s-version-856421 kubelet[667]: E0407 13:29:59.882436 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.106251 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:01 old-k8s-version-856421 kubelet[667]: E0407 13:30:01.885267 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.106586 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: E0407 13:30:10.882871 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.106770 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.107101 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.107285 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:48.107610 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:48.107794 1095137 logs.go:138] Found kubelet problem: Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
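The fifty-odd entries above reduce to two failure loops: metrics-server never starts because its image reference fake.domain/registry.k8s.io/echoserver:1.4 points at an unresolvable registry (this test wires that in deliberately, so the ErrImagePull/ImagePullBackOff churn is expected), and dashboard-metrics-scraper keeps crashing, with kubelet's restart back-off doubling from 10s through 20s, 40s and 1m20s to the 2m40s cap. A minimal way to reproduce the DNS side by hand, assuming the profile from this run still exists:

  # should fail with the same "no such host" error quoted in the kubelet entries above
  minikube ssh -p old-k8s-version-856421 -- sudo /usr/bin/crictl pull fake.domain/registry.k8s.io/echoserver:1.4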
I0407 13:30:48.107807 1095137 logs.go:123] Gathering logs for kube-proxy [77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b] ...
I0407 13:30:48.107822 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b"
I0407 13:30:48.156575 1095137 logs.go:123] Gathering logs for kindnet [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b] ...
I0407 13:30:48.156606 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b"
I0407 13:30:48.232444 1095137 logs.go:123] Gathering logs for kindnet [b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb] ...
I0407 13:30:48.232472 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb"
I0407 13:30:48.305914 1095137 logs.go:123] Gathering logs for kube-apiserver [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af] ...
I0407 13:30:48.305993 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af"
I0407 13:30:48.379011 1095137 logs.go:123] Gathering logs for kube-apiserver [d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b] ...
I0407 13:30:48.379086 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b"
I0407 13:30:48.462552 1095137 logs.go:123] Gathering logs for etcd [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5] ...
I0407 13:30:48.462584 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5"
I0407 13:30:48.528785 1095137 logs.go:123] Gathering logs for kube-scheduler [c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030] ...
I0407 13:30:48.528974 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030"
I0407 13:30:48.589264 1095137 logs.go:123] Gathering logs for kube-controller-manager [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df] ...
I0407 13:30:48.589336 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df"
I0407 13:30:48.680565 1095137 logs.go:123] Gathering logs for kube-controller-manager [2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2] ...
I0407 13:30:48.680604 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2"
I0407 13:30:48.779599 1095137 logs.go:123] Gathering logs for storage-provisioner [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61] ...
I0407 13:30:48.779675 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61"
I0407 13:30:48.835392 1095137 logs.go:123] Gathering logs for containerd ...
I0407 13:30:48.835418 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0407 13:30:48.929349 1095137 logs.go:123] Gathering logs for dmesg ...
I0407 13:30:48.929382 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0407 13:30:48.954396 1095137 logs.go:123] Gathering logs for etcd [ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735] ...
I0407 13:30:48.954423 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735"
I0407 13:30:49.030928 1095137 logs.go:123] Gathering logs for coredns [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a] ...
I0407 13:30:49.031024 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a"
I0407 13:30:49.110624 1095137 logs.go:123] Gathering logs for kube-scheduler [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7] ...
I0407 13:30:49.110700 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7"
I0407 13:30:49.161794 1095137 logs.go:123] Gathering logs for kube-proxy [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088] ...
I0407 13:30:49.161888 1095137 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088"
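Each "Gathering logs for ..." step above shells into the node and tails the last 400 lines for one container id. Any of them can be replayed by hand; a sketch using the kube-proxy container id from this run:

  minikube ssh -p old-k8s-version-856421 -- sudo /usr/bin/crictl logs --tail 400 77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b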
I0407 13:30:49.226058 1095137 out.go:358] Setting ErrFile to fd 2...
I0407 13:30:49.226135 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0407 13:30:49.226216 1095137 out.go:270] X Problems detected in kubelet:
W0407 13:30:49.226386 1095137 out.go:270] Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:49.226426 1095137 out.go:270] Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:49.226481 1095137 out.go:270] Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:30:49.226514 1095137 out.go:270] Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
W0407 13:30:49.226557 1095137 out.go:270] Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0407 13:30:49.226602 1095137 out.go:358] Setting ErrFile to fd 2...
I0407 13:30:49.226640 1095137 out.go:392] TERM=,COLORTERM=, which probably does not support color
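From this point the output interleaves a second, concurrently running start (pid 1107590, profile embed-certs-688390) with the failing old-k8s-version run (pid 1095137). When reading a capture like this, filtering on the pid column separates the two streams; a sketch, assuming the output was saved to logs.txt:

  grep ' 1095137 ' logs.txt   # old-k8s-version-856421 run only
  grep ' 1107590 ' logs.txt   # embed-certs-688390 run only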
I0407 13:30:49.719980 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0407 13:30:49.720008 1107590 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0407 13:30:49.720101 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:49.743258 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
I0407 13:30:49.748723 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
I0407 13:30:49.782080 1107590 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0407 13:30:49.782100 1107590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0407 13:30:49.782164 1107590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688390
I0407 13:30:49.801166 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
I0407 13:30:49.814700 1107590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/20602-873072/.minikube/machines/embed-certs-688390/id_rsa Username:docker}
I0407 13:30:49.885594 1107590 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0407 13:30:49.953383 1107590 node_ready.go:35] waiting up to 6m0s for node "embed-certs-688390" to be "Ready" ...
I0407 13:30:50.017289 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0407 13:30:50.017374 1107590 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0407 13:30:50.141489 1107590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0407 13:30:50.148337 1107590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:30:50.152555 1107590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0407 13:30:50.152628 1107590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0407 13:30:50.189138 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0407 13:30:50.189223 1107590 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0407 13:30:50.235425 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0407 13:30:50.235496 1107590 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0407 13:30:50.323856 1107590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0407 13:30:50.323929 1107590 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0407 13:30:50.467515 1107590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0407 13:30:50.467589 1107590 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0407 13:30:50.478779 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0407 13:30:50.478851 1107590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0407 13:30:50.615259 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0407 13:30:50.615331 1107590 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0407 13:30:50.660938 1107590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0407 13:30:50.725537 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0407 13:30:50.725613 1107590 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0407 13:30:50.834421 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0407 13:30:50.834499 1107590 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0407 13:30:50.979246 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0407 13:30:50.979320 1107590 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0407 13:30:51.063504 1107590 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:30:51.063579 1107590 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0407 13:30:51.106347 1107590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:30:54.638260 1107590 node_ready.go:49] node "embed-certs-688390" has status "Ready":"True"
I0407 13:30:54.638347 1107590 node_ready.go:38] duration metric: took 4.684914251s for node "embed-certs-688390" to be "Ready" ...
I0407 13:30:54.638376 1107590 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0407 13:30:54.690506 1107590 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-855lf" in "kube-system" namespace to be "Ready" ...
I0407 13:30:54.722772 1107590 pod_ready.go:93] pod "coredns-668d6bf9bc-855lf" in "kube-system" namespace has status "Ready":"True"
I0407 13:30:54.722855 1107590 pod_ready.go:82] duration metric: took 32.270521ms for pod "coredns-668d6bf9bc-855lf" in "kube-system" namespace to be "Ready" ...
I0407 13:30:54.722883 1107590 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
I0407 13:30:54.737677 1107590 pod_ready.go:93] pod "etcd-embed-certs-688390" in "kube-system" namespace has status "Ready":"True"
I0407 13:30:54.737712 1107590 pod_ready.go:82] duration metric: took 14.8067ms for pod "etcd-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
I0407 13:30:54.737729 1107590 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
I0407 13:30:54.757757 1107590 pod_ready.go:93] pod "kube-apiserver-embed-certs-688390" in "kube-system" namespace has status "Ready":"True"
I0407 13:30:54.757782 1107590 pod_ready.go:82] duration metric: took 20.045109ms for pod "kube-apiserver-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
I0407 13:30:54.757795 1107590 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
I0407 13:30:54.799777 1107590 pod_ready.go:93] pod "kube-controller-manager-embed-certs-688390" in "kube-system" namespace has status "Ready":"True"
I0407 13:30:54.799852 1107590 pod_ready.go:82] duration metric: took 42.049171ms for pod "kube-controller-manager-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
I0407 13:30:54.799879 1107590 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-npv7l" in "kube-system" namespace to be "Ready" ...
I0407 13:30:54.854334 1107590 pod_ready.go:93] pod "kube-proxy-npv7l" in "kube-system" namespace has status "Ready":"True"
I0407 13:30:54.854417 1107590 pod_ready.go:82] duration metric: took 54.515522ms for pod "kube-proxy-npv7l" in "kube-system" namespace to be "Ready" ...
I0407 13:30:54.854443 1107590 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-688390" in "kube-system" namespace to be "Ready" ...
I0407 13:30:55.122265 1107590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.980739539s)
I0407 13:30:56.879032 1107590 pod_ready.go:103] pod "kube-scheduler-embed-certs-688390" in "kube-system" namespace has status "Ready":"False"
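The pod_ready loop above polls each system-critical pod's Ready condition until it flips or the 6m0s budget runs out. The same wait can be done by hand with kubectl; a sketch, assuming the kubeconfig context created for this profile:

  kubectl --context embed-certs-688390 -n kube-system wait --for=condition=Ready pod/kube-scheduler-embed-certs-688390 --timeout=6m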
I0407 13:30:58.362134 1107590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.21376444s)
I0407 13:30:58.556458 1107590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.895434082s)
I0407 13:30:58.556501 1107590 addons.go:479] Verifying addon metrics-server=true in "embed-certs-688390"
I0407 13:30:58.873694 1107590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.767253916s)
I0407 13:30:58.877112 1107590 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-688390 addons enable metrics-server
I0407 13:30:58.879958 1107590 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
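With all four addons reported enabled for embed-certs-688390, the state can be double-checked from the host; a sketch:

  minikube -p embed-certs-688390 addons list   # should show default-storageclass, storage-provisioner, metrics-server and dashboard enabled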
I0407 13:30:59.228033 1095137 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0407 13:30:59.239760 1095137 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0407 13:30:59.244653 1095137 out.go:201]
W0407 13:30:59.247535 1095137 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0407 13:30:59.247763 1095137 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0407 13:30:59.247829 1095137 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0407 13:30:59.247882 1095137 out.go:270] *
W0407 13:30:59.248818 1095137 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0407 13:30:59.252447 1095137 out.go:201]
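Note what the exit actually says: the apiserver's /healthz returned 200, yet the start fails with K8S_UNHEALTHY_CONTROL_PLANE because the control plane never reported the requested v1.20.0. The recovery path is the one printed above; a sketch, assuming it is acceptable to drop all local profiles:

  minikube delete --all --purge
  minikube start -p old-k8s-version-856421 --kubernetes-version=v1.20.0   # plus the driver/runtime flags from the original invocation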
==> container status <==
CONTAINER       IMAGE           CREATED         STATE    NAME                        ATTEMPT   POD ID          POD
1df7625042191   523cad1a4df73   2 minutes ago   Exited   dashboard-metrics-scraper   5         481fae8da9e40   dashboard-metrics-scraper-8d5bb5db8-52tjd
2c994f7c46244   ba04bb24b9575   4 minutes ago   Running  storage-provisioner         2         d900ee90a62f8   storage-provisioner
3e668dbcb4dd3   20b332c9a70d8   5 minutes ago   Running  kubernetes-dashboard        0         60ebbca337412   kubernetes-dashboard-cd95d586-nrhfv
051642364b305   db91994f4ee8f   5 minutes ago   Running  coredns                     1         c32a143427a1c   coredns-74ff55c5b-gtrrb
74af9023b4fde   25a5233254979   5 minutes ago   Running  kube-proxy                  1         0ba7816b7eb92   kube-proxy-j5fsn
d05c978cfa5a3   ba04bb24b9575   5 minutes ago   Exited   storage-provisioner         1         d900ee90a62f8   storage-provisioner
e4568189822d4   ee75e27fff91c   5 minutes ago   Running  kindnet-cni                 1         bb5f824017538   kindnet-8q8nx
417089f9d6f6e   1611cd07b61d5   5 minutes ago   Running  busybox                     1         108d027757958   busybox
04c3bb1dfe7c8   1df8a2b116bd1   5 minutes ago   Running  kube-controller-manager     1         96c8ae41774db   kube-controller-manager-old-k8s-version-856421
5864d99cdd47d   05b738aa1bc63   5 minutes ago   Running  etcd                        1         62cc726a5255d   etcd-old-k8s-version-856421
a1ef4f8376e9f   2c08bbbc02d3a   5 minutes ago   Running  kube-apiserver              1         ea7a7c3c55c48   kube-apiserver-old-k8s-version-856421
d6760ad08162f   e7605f88f17d6   5 minutes ago   Running  kube-scheduler              1         8d95fd0725913   kube-scheduler-old-k8s-version-856421
e32207a71e4f1   1611cd07b61d5   6 minutes ago   Exited   busybox                     0         65480f8847710   busybox
e8606d211bc70   db91994f4ee8f   8 minutes ago   Exited   coredns                     0         42405f984c851   coredns-74ff55c5b-gtrrb
b2ada56c528ba   ee75e27fff91c   8 minutes ago   Exited   kindnet-cni                 0         bd07d66a8baca   kindnet-8q8nx
77f2619b2d1aa   25a5233254979   8 minutes ago   Exited   kube-proxy                  0         e06826c1a5d6d   kube-proxy-j5fsn
ad3658b16b264   05b738aa1bc63   8 minutes ago   Exited   etcd                        0         bf93bea3f7b03   etcd-old-k8s-version-856421
c6f6d481b0f4c   e7605f88f17d6   8 minutes ago   Exited   kube-scheduler              0         acfaf5cdb4249   kube-scheduler-old-k8s-version-856421
d89f223f22b86   2c08bbbc02d3a   8 minutes ago   Exited   kube-apiserver              0         8ed927c15dd98   kube-apiserver-old-k8s-version-856421
2b349ae2ec417   1df8a2b116bd1   8 minutes ago   Exited   kube-controller-manager     0         d54344d5cb3a1   kube-controller-manager-old-k8s-version-856421
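The table lines up with the kubelet errors: dashboard-metrics-scraper sits at attempt 5 in Exited (the CrashLoopBackOff above), every control-plane component has a Running attempt-1 container plus its Exited attempt-0 predecessor from before the restart, and no metrics-server container appears at all because its image never pulled. A listing like this can be regenerated on demand; a sketch:

  minikube ssh -p old-k8s-version-856421 -- sudo /usr/bin/crictl ps -a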
==> containerd <==
Apr 07 13:26:46 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:26:46.916342700Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Apr 07 13:27:04 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:04.884652930Z" level=info msg="CreateContainer within sandbox \"481fae8da9e40b0c6b9d7ed57ce568c92b0a7de5077cc9f5b6527a7b29ea0172\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Apr 07 13:27:04 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:04.906988950Z" level=info msg="CreateContainer within sandbox \"481fae8da9e40b0c6b9d7ed57ce568c92b0a7de5077cc9f5b6527a7b29ea0172\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\""
Apr 07 13:27:04 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:04.912783815Z" level=info msg="StartContainer for \"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\""
Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.005563627Z" level=info msg="StartContainer for \"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\" returns successfully"
Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.005627595Z" level=info msg="received exit event container_id:\"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\" id:\"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\" pid:3047 exit_status:255 exited_at:{seconds:1744032425 nanos:4710043}"
Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.036080647Z" level=info msg="shim disconnected" id=71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4 namespace=k8s.io
Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.036121812Z" level=warning msg="cleaning up after shim disconnected" id=71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4 namespace=k8s.io
Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.036162797Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.453219446Z" level=info msg="RemoveContainer for \"223e8b72be32919eadd5acb820b4dd7b7c1450a6869493c759e2c9e8529c8d75\""
Apr 07 13:27:05 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:27:05.460589818Z" level=info msg="RemoveContainer for \"223e8b72be32919eadd5acb820b4dd7b7c1450a6869493c759e2c9e8529c8d75\" returns successfully"
Apr 07 13:28:15 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:15.882656938Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:28:15 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:15.892524829Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Apr 07 13:28:15 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:15.894628006Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:28:15 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:15.894648896Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Apr 07 13:28:35 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:35.884129413Z" level=info msg="CreateContainer within sandbox \"481fae8da9e40b0c6b9d7ed57ce568c92b0a7de5077cc9f5b6527a7b29ea0172\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Apr 07 13:28:35 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:35.903590328Z" level=info msg="CreateContainer within sandbox \"481fae8da9e40b0c6b9d7ed57ce568c92b0a7de5077cc9f5b6527a7b29ea0172\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34\""
Apr 07 13:28:35 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:35.904500191Z" level=info msg="StartContainer for \"1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34\""
Apr 07 13:28:35 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:35.981178739Z" level=info msg="StartContainer for \"1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34\" returns successfully"
Apr 07 13:28:35 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:35.983860402Z" level=info msg="received exit event container_id:\"1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34\" id:\"1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34\" pid:3304 exit_status:255 exited_at:{seconds:1744032515 nanos:983628891}"
Apr 07 13:28:36 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:36.025492793Z" level=info msg="shim disconnected" id=1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34 namespace=k8s.io
Apr 07 13:28:36 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:36.025536527Z" level=warning msg="cleaning up after shim disconnected" id=1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34 namespace=k8s.io
Apr 07 13:28:36 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:36.025695447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 07 13:28:36 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:36.685305910Z" level=info msg="RemoveContainer for \"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\""
Apr 07 13:28:36 old-k8s-version-856421 containerd[570]: time="2025-04-07T13:28:36.698770251Z" level=info msg="RemoveContainer for \"71f808f85b309809247cc3e4c9e9c18e41d6b6770cd4b0eb5d57f89f5045add4\" returns successfully"
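The containerd log shows the scraper's create/start/exit cycle directly: attempts 4 and 5 both exit with status 255 within roughly 100ms of starting, after which the previous attempt's container is removed. To see the scraper's own output rather than containerd's bookkeeping, the full container id from attempt 5 can be fed back to crictl; a sketch:

  minikube ssh -p old-k8s-version-856421 -- sudo /usr/bin/crictl logs 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34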
==> coredns [051642364b305742215427b11bf59d630eb773d7d2d8489872ccb8bafb97a33a] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:60252 - 3125 "HINFO IN 8244431149089733818.7730825843377024063. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013087223s
==> coredns [e8606d211bc70838dc98631de3d337f0689089d54c4dd0963c66fed159c72bce] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:56619 - 25099 "HINFO IN 8395759530412048856.6897807622872564169. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030681142s
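Both coredns generations load an identical configuration (same MD5) and log only the startup HINFO self-query, whose NXDOMAIN answer is normal. In-cluster DNS is therefore healthy; the fake.domain lookups fail at the host-side resolver 192.168.76.1, not in coredns. A quick in-cluster check, assuming the kubeconfig context for this profile and that busybox:1.28 is pullable:

  kubectl --context old-k8s-version-856421 run dnstest --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default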
==> describe nodes <==
Name: old-k8s-version-856421
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-856421
kubernetes.io/os=linux
minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43
minikube.k8s.io/name=old-k8s-version-856421
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_04_07T13_22_15_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 07 Apr 2025 13:22:11 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-856421
AcquireTime: <unset>
RenewTime: Mon, 07 Apr 2025 13:30:56 +0000
Conditions:
Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------   -----------------                 ------------------                ------                       -------
MemoryPressure   False    Mon, 07 Apr 2025 13:26:04 +0000   Mon, 07 Apr 2025 13:22:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False    Mon, 07 Apr 2025 13:26:04 +0000   Mon, 07 Apr 2025 13:22:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False    Mon, 07 Apr 2025 13:26:04 +0000   Mon, 07 Apr 2025 13:22:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True     Mon, 07 Apr 2025 13:26:04 +0000   Mon, 07 Apr 2025 13:22:29 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-856421
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
System Info:
Machine ID: 4795b06879444d91806fbc5506b71cbf
System UUID: e1be3290-981d-4c45-832d-195b60a8715e
Boot ID: 23ff30ac-10fb-424b-be6b-3b05e144d397
Kernel Version: 5.15.0-1081-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.27
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace             Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------             ----                                              ------------  ----------  ---------------  -------------  ---
default               busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
kube-system           coredns-74ff55c5b-gtrrb                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m32s
kube-system           etcd-old-k8s-version-856421                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m38s
kube-system           kindnet-8q8nx                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m32s
kube-system           kube-apiserver-old-k8s-version-856421             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m38s
kube-system           kube-controller-manager-old-k8s-version-856421    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m38s
kube-system           kube-proxy-j5fsn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
kube-system           kube-scheduler-old-k8s-version-856421             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m38s
kube-system           metrics-server-9975d5f86-tkvrz                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
kube-system           storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
kubernetes-dashboard  dashboard-metrics-scraper-8d5bb5db8-52tjd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
kubernetes-dashboard  kubernetes-dashboard-cd95d586-nrhfv               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                950m (47%)  100m (5%)
memory             420Mi (5%)  220Mi (2%)
ephemeral-storage  100Mi (0%)  0 (0%)
hugepages-1Gi      0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
hugepages-32Mi     0 (0%)      0 (0%)
hugepages-64Ki     0 (0%)      0 (0%)
Events:
Type    Reason                   Age                    From        Message
----    ------                   ----                   ----        -------
Normal  Starting                 8m58s                  kubelet     Starting kubelet.
Normal  NodeHasSufficientMemory  8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  8m58s                  kubelet     Updated Node Allocatable limit across pods
Normal  Starting                 8m39s                  kubelet     Starting kubelet.
Normal  NodeHasSufficientMemory  8m39s                  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    8m39s                  kubelet     Node old-k8s-version-856421 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     8m39s                  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  8m39s                  kubelet     Updated Node Allocatable limit across pods
Normal  NodeReady                8m32s                  kubelet     Node old-k8s-version-856421 status is now: NodeReady
Normal  Starting                 8m30s                  kube-proxy  Starting kube-proxy.
Normal  Starting                 5m59s                  kubelet     Starting kubelet.
Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    5m59s (x7 over 5m59s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-856421 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  5m59s                  kubelet     Updated Node Allocatable limit across pods
Normal  Starting                 5m44s                  kube-proxy  Starting kube-proxy.
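Reading the node description as a whole: the node is Ready, all pressure conditions are False, and every control-plane pod is scheduled; the successive "Starting kubelet." event groups (8m58s, 8m39s, 5m59s) correspond to the kubelet (re)starts across the first start and this SecondStart run. Output of this shape can be reproduced against the same profile with a command along these lines (context name taken from this run):
    kubectl --context old-k8s-version-856421 describe node old-k8s-version-856421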
==> dmesg <==
[Apr 7 12:09] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
==> etcd [5864d99cdd47d055aee6b1526bb1c6a1ee5862850613e2d63e0d93e141c7f4d5] <==
2025-04-07 13:26:53.507201 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:27:03.507187 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:27:13.507293 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:27:23.507328 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:27:33.507332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:27:43.507522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:27:53.507143 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:28:03.507327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:28:13.507145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:28:23.507326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:28:33.507385 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:28:43.507079 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:28:53.507145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:29:03.507315 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:29:13.507097 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:29:23.507409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:29:33.507200 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:29:43.507187 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:29:53.507302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:30:03.507177 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:30:13.507347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:30:23.507588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:30:33.507113 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:30:43.507206 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:30:53.507207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [ad3658b16b26420a5c8d95f062de8e38e1c9d8d5235a77bf2a50e7228aabb735] <==
raft2025/04/07 13:22:04 INFO: ea7e25599daad906 became candidate at term 2
raft2025/04/07 13:22:04 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2025/04/07 13:22:04 INFO: ea7e25599daad906 became leader at term 2
raft2025/04/07 13:22:04 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-04-07 13:22:04.648270 I | etcdserver: published {Name:old-k8s-version-856421 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-04-07 13:22:04.648524 I | etcdserver: setting up the initial cluster version to 3.4
2025-04-07 13:22:04.648597 I | embed: ready to serve client requests
2025-04-07 13:22:04.649943 I | embed: serving client requests on 192.168.76.2:2379
2025-04-07 13:22:04.650065 I | embed: ready to serve client requests
2025-04-07 13:22:04.651141 I | embed: serving client requests on 127.0.0.1:2379
2025-04-07 13:22:04.710980 N | etcdserver/membership: set the initial cluster version to 3.4
2025-04-07 13:22:04.711115 I | etcdserver/api: enabled capabilities for version 3.4
2025-04-07 13:22:32.926535 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:22:41.923526 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:22:51.923483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:23:01.923676 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:23:11.923483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:23:21.925490 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:23:31.923844 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:23:41.923649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:23:51.925212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:24:01.923755 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:24:11.923561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:24:21.923516 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:24:31.923549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
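The two etcd blocks are two containers of the same member: ad3658... is the pre-restart instance (it wins the term-2 leader election and serves clients on 192.168.76.2:2379 and 127.0.0.1:2379), and 5864d9... is the post-restart one. Both report /health OK for their entire captured window, so etcd is not the failing component here. Per-container dumps of this kind are what the profile's log command produces, e.g.:
    minikube -p old-k8s-version-856421 logs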
==> kernel <==
13:31:01 up 5:13, 0 users, load average: 5.07, 2.85, 2.83
Linux old-k8s-version-856421 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [b2ada56c528ba0083dc91efb8ea35b5b75cc614fa96a6cd93819b45a72ee05fb] <==
I0407 13:22:33.229878 1 controller.go:401] Syncing nftables rules
I0407 13:22:43.052518 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:22:43.052597 1 main.go:301] handling current node
I0407 13:22:53.043506 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:22:53.043732 1 main.go:301] handling current node
I0407 13:23:03.052526 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:23:03.052648 1 main.go:301] handling current node
I0407 13:23:13.051860 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:23:13.051893 1 main.go:301] handling current node
I0407 13:23:23.044874 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:23:23.044909 1 main.go:301] handling current node
I0407 13:23:33.044055 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:23:33.044100 1 main.go:301] handling current node
I0407 13:23:43.049760 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:23:43.049804 1 main.go:301] handling current node
I0407 13:23:53.045909 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:23:53.045945 1 main.go:301] handling current node
I0407 13:24:03.047545 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:24:03.047581 1 main.go:301] handling current node
I0407 13:24:13.051189 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:24:13.051224 1 main.go:301] handling current node
I0407 13:24:23.043066 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:24:23.043101 1 main.go:301] handling current node
I0407 13:24:33.043076 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:24:33.043114 1 main.go:301] handling current node
==> kindnet [e4568189822d430c58ad9f0a391ab15cbb27395e1408248c2c03f19d5dc9150b] <==
I0407 13:28:57.156599 1 main.go:301] handling current node
I0407 13:29:07.154068 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:29:07.154104 1 main.go:301] handling current node
I0407 13:29:17.147542 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:29:17.147610 1 main.go:301] handling current node
I0407 13:29:27.153958 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:29:27.154067 1 main.go:301] handling current node
I0407 13:29:37.153814 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:29:37.153854 1 main.go:301] handling current node
I0407 13:29:47.147760 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:29:47.147797 1 main.go:301] handling current node
I0407 13:29:57.154535 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:29:57.154575 1 main.go:301] handling current node
I0407 13:30:07.152842 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:30:07.152880 1 main.go:301] handling current node
I0407 13:30:17.147951 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:30:17.148061 1 main.go:301] handling current node
I0407 13:30:27.153805 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:30:27.153843 1 main.go:301] handling current node
I0407 13:30:37.153790 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:30:37.153827 1 main.go:301] handling current node
I0407 13:30:47.147599 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:30:47.147636 1 main.go:301] handling current node
I0407 13:30:57.154584 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0407 13:30:57.154621 1 main.go:301] handling current node
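Likewise for kindnet: both containers simply re-handle the single node 192.168.76.2 on each periodic sync, with no errors. The live pod can be queried directly (pod name taken from the node's pod list above):
    kubectl --context old-k8s-version-856421 -n kube-system logs kindnet-8q8nx --tail=20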
==> kube-apiserver [a1ef4f8376e9fc950587e9d4daa95427d19c48e4606813b7ce87102e58fd46af] <==
I0407 13:27:31.767346 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:27:31.767356 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0407 13:28:10.812872 1 client.go:360] parsed scheme: "passthrough"
I0407 13:28:10.813105 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:28:10.813193 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0407 13:28:17.207074 1 handler_proxy.go:102] no RequestInfo found in the context
E0407 13:28:17.207329 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0407 13:28:17.207347 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0407 13:28:42.326737 1 client.go:360] parsed scheme: "passthrough"
I0407 13:28:42.326790 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:28:42.326799 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0407 13:29:12.498697 1 client.go:360] parsed scheme: "passthrough"
I0407 13:29:12.498741 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:29:12.498752 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0407 13:29:49.514099 1 client.go:360] parsed scheme: "passthrough"
I0407 13:29:49.514153 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:29:49.514162 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0407 13:30:15.312399 1 handler_proxy.go:102] no RequestInfo found in the context
E0407 13:30:15.312609 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0407 13:30:15.312625 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0407 13:30:27.022798 1 client.go:360] parsed scheme: "passthrough"
I0407 13:30:27.022845 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:30:27.022853 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [d89f223f22b8606c12ec0e778fa21d2abe90349e214332d4a1fa32b45f514d9b] <==
I0407 13:22:11.828581 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0407 13:22:11.828605 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0407 13:22:12.381249 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0407 13:22:12.433164 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0407 13:22:12.506657 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0407 13:22:12.507971 1 controller.go:606] quota admission added evaluator for: endpoints
I0407 13:22:12.515021 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0407 13:22:12.833391 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0407 13:22:13.440319 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0407 13:22:14.271049 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0407 13:22:14.322927 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0407 13:22:29.363288 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0407 13:22:29.451983 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0407 13:22:42.549185 1 client.go:360] parsed scheme: "passthrough"
I0407 13:22:42.549235 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:22:42.549244 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0407 13:23:13.659861 1 client.go:360] parsed scheme: "passthrough"
I0407 13:23:13.660064 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:23:13.660158 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0407 13:23:50.556353 1 client.go:360] parsed scheme: "passthrough"
I0407 13:23:50.556401 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:23:50.556409 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0407 13:24:30.627016 1 client.go:360] parsed scheme: "passthrough"
I0407 13:24:30.627061 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:24:30.627070 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
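The only recurring apiserver error is the 503 while aggregating v1beta1.metrics.k8s.io: the metrics-server backing that APIService never becomes available because its image is pinned to the unreachable fake.domain registry (see the kubelet ImagePullBackOff entries further down). Assuming the usual APIService name, the availability condition can be checked with:
    kubectl --context old-k8s-version-856421 get apiservice v1beta1.metrics.k8s.io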
==> kube-controller-manager [04c3bb1dfe7c8d458ce5ec56757e932f8a599e0d467221dd4b85ef6a3fefa6df] <==
W0407 13:26:37.427371 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:27:03.457165 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:27:09.078011 1 request.go:655] Throttling request took 1.0483435s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1?timeout=32s
W0407 13:27:09.929506 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:27:33.959144 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:27:41.580120 1 request.go:655] Throttling request took 1.047299102s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
W0407 13:27:42.431730 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:28:04.460878 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:28:14.082286 1 request.go:655] Throttling request took 1.048367247s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
W0407 13:28:14.933999 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:28:34.962737 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:28:46.584632 1 request.go:655] Throttling request took 1.048371477s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
W0407 13:28:47.436161 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:29:05.464598 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:29:19.086503 1 request.go:655] Throttling request took 1.048136871s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0407 13:29:19.938099 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:29:35.968010 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:29:51.588654 1 request.go:655] Throttling request took 1.048422407s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
W0407 13:29:52.440210 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:30:06.470097 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:30:24.090645 1 request.go:655] Throttling request took 1.048351001s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
W0407 13:30:24.942143 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:30:36.972212 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:30:56.592508 1 request.go:655] Throttling request took 1.04815208s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0407 13:30:57.444165 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-controller-manager [2b349ae2ec417a9d000e61e8c66afdcccb2d202d992eec33826df20e410f87a2] <==
I0407 13:22:29.417785 1 shared_informer.go:247] Caches are synced for service account
I0407 13:22:29.421628 1 shared_informer.go:247] Caches are synced for disruption
I0407 13:22:29.421651 1 disruption.go:339] Sending events to api server.
I0407 13:22:29.428330 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-kfvbc"
I0407 13:22:29.438507 1 shared_informer.go:247] Caches are synced for daemon sets
I0407 13:22:29.448040 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-gtrrb"
I0407 13:22:29.486374 1 shared_informer.go:247] Caches are synced for ReplicationController
I0407 13:22:29.492899 1 shared_informer.go:247] Caches are synced for resource quota
I0407 13:22:29.496575 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-j5fsn"
I0407 13:22:29.512522 1 shared_informer.go:247] Caches are synced for resource quota
I0407 13:22:29.529201 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8q8nx"
E0407 13:22:29.600347 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"300f6abd-8358-438e-ac2d-b30583f29332", ResourceVersion:"281", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63879628934, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40012b09e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40012b0a00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40012b0a20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40014b8a40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40012b0a40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40012b0a60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40012b0aa0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400135f920), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000f09998), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400017c930), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000ef08)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000f099e8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0407 13:22:29.603156 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0407 13:22:29.609186 1 shared_informer.go:247] Caches are synced for attach detach
E0407 13:22:29.632318 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0407 13:22:29.648318 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0407 13:22:29.759951 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0407 13:22:30.061251 1 shared_informer.go:247] Caches are synced for garbage collector
I0407 13:22:30.104396 1 shared_informer.go:247] Caches are synced for garbage collector
I0407 13:22:30.104423 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0407 13:22:30.977654 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0407 13:22:31.079373 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-kfvbc"
I0407 13:22:34.339759 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0407 13:24:33.248179 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E0407 13:24:33.318012 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
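The controller-manager errors are downstream of the same gap: garbage collection and resource-quota discovery fail only for the metrics.k8s.io/v1beta1 group, the ~1.05s "Throttling request" lines are ordinary client-side rate limiting during repeated API discovery, and the "object has been modified" conflicts on the admin/edit clusterroles and the kube-proxy daemonset during first start are routine optimistic-concurrency retries. Discovery should surface the same single unreachable group (failed groups are reported on stderr):
    kubectl --context old-k8s-version-856421 api-resources >/dev/null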
==> kube-proxy [74af9023b4fde2bd349b21c976719ba85dda1ea03171badab844478b8cbf2088] <==
I0407 13:25:16.994516 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0407 13:25:16.994916 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0407 13:25:17.025961 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0407 13:25:17.026513 1 server_others.go:185] Using iptables Proxier.
I0407 13:25:17.026906 1 server.go:650] Version: v1.20.0
I0407 13:25:17.029508 1 config.go:315] Starting service config controller
I0407 13:25:17.029630 1 shared_informer.go:240] Waiting for caches to sync for service config
I0407 13:25:17.033242 1 config.go:224] Starting endpoint slice config controller
I0407 13:25:17.034908 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0407 13:25:17.134535 1 shared_informer.go:247] Caches are synced for service config
I0407 13:25:17.135192 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [77f2619b2d1aad99f71633fed74393b3b19f570432847d24bea319ce3f3cc54b] <==
I0407 13:22:31.411656 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0407 13:22:31.411763 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0407 13:22:31.445345 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0407 13:22:31.445438 1 server_others.go:185] Using iptables Proxier.
I0407 13:22:31.445658 1 server.go:650] Version: v1.20.0
I0407 13:22:31.447571 1 config.go:315] Starting service config controller
I0407 13:22:31.447586 1 shared_informer.go:240] Waiting for caches to sync for service config
I0407 13:22:31.448041 1 config.go:224] Starting endpoint slice config controller
I0407 13:22:31.448047 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0407 13:22:31.548108 1 shared_informer.go:247] Caches are synced for service config
I0407 13:22:31.548453 1 shared_informer.go:247] Caches are synced for endpoint slice config
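Both kube-proxy containers behave identically: the Unknown proxy mode "" warning only means no mode was set in the kube-proxy configuration, so it falls back to the iptables proxier, after which both config caches sync. If needed, the configured mode can be read from the kubeadm-managed configmap (name as in standard kubeadm clusters):
    kubectl --context old-k8s-version-856421 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'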
==> kube-scheduler [c6f6d481b0f4c36b1a380741fe1a1dc69435fb844001d3a69feaa1fddcdd4030] <==
W0407 13:22:11.058570 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0407 13:22:11.058598 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0407 13:22:11.059053 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0407 13:22:11.126593 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0407 13:22:11.126806 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 13:22:11.126853 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 13:22:11.126888 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0407 13:22:11.136255 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0407 13:22:11.136582 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0407 13:22:11.144442 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0407 13:22:11.144707 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0407 13:22:11.145976 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0407 13:22:11.146240 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0407 13:22:11.147946 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0407 13:22:11.148268 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0407 13:22:11.155354 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0407 13:22:11.155675 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0407 13:22:11.155912 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0407 13:22:11.156240 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0407 13:22:12.068209 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0407 13:22:12.078072 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0407 13:22:12.102133 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0407 13:22:12.131491 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0407 13:22:12.208064 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0407 13:22:12.526927 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [d6760ad08162f8ec381c7feef6ca753faf89ef2bb6b6aaf4ad3420559e5f73e7] <==
I0407 13:25:08.117773 1 serving.go:331] Generated self-signed cert in-memory
W0407 13:25:14.181318 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0407 13:25:14.182983 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0407 13:25:14.183067 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0407 13:25:14.183145 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0407 13:25:14.470287 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0407 13:25:14.472013 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 13:25:14.472033 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 13:25:14.472048 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0407 13:25:14.573899 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
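The scheduler's "forbidden" list/watch errors in both blocks occur only in the first seconds after startup, before its RBAC bindings are visible to it; each instance ends with its client-ca informer caches synced, so scheduling itself is healthy. Its static pod can be located with the usual kubeadm component label:
    kubectl --context old-k8s-version-856421 -n kube-system get pods -l component=kube-scheduler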
==> kubelet <==
Apr 07 13:29:15 old-k8s-version-856421 kubelet[667]: E0407 13:29:15.882105 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:29:19 old-k8s-version-856421 kubelet[667]: I0407 13:29:19.881302 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
Apr 07 13:29:19 old-k8s-version-856421 kubelet[667]: E0407 13:29:19.881643 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
Apr 07 13:29:28 old-k8s-version-856421 kubelet[667]: E0407 13:29:28.884253 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:29:32 old-k8s-version-856421 kubelet[667]: I0407 13:29:32.881668 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
Apr 07 13:29:32 old-k8s-version-856421 kubelet[667]: E0407 13:29:32.882068 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
Apr 07 13:29:39 old-k8s-version-856421 kubelet[667]: E0407 13:29:39.883031 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:29:47 old-k8s-version-856421 kubelet[667]: I0407 13:29:47.881357 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
Apr 07 13:29:47 old-k8s-version-856421 kubelet[667]: E0407 13:29:47.882527 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
Apr 07 13:29:50 old-k8s-version-856421 kubelet[667]: E0407 13:29:50.882583 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:29:59 old-k8s-version-856421 kubelet[667]: I0407 13:29:59.881312 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
Apr 07 13:29:59 old-k8s-version-856421 kubelet[667]: E0407 13:29:59.882436 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
Apr 07 13:30:01 old-k8s-version-856421 kubelet[667]: E0407 13:30:01.885267 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: I0407 13:30:10.881624 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
Apr 07 13:30:10 old-k8s-version-856421 kubelet[667]: E0407 13:30:10.882871 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
Apr 07 13:30:15 old-k8s-version-856421 kubelet[667]: E0407 13:30:15.882153 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: I0407 13:30:22.881884 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
Apr 07 13:30:22 old-k8s-version-856421 kubelet[667]: E0407 13:30:22.882311 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
Apr 07 13:30:28 old-k8s-version-856421 kubelet[667]: E0407 13:30:28.882142 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: I0407 13:30:35.885009 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
Apr 07 13:30:35 old-k8s-version-856421 kubelet[667]: E0407 13:30:35.885355 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
Apr 07 13:30:41 old-k8s-version-856421 kubelet[667]: E0407 13:30:41.882269 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:30:49 old-k8s-version-856421 kubelet[667]: I0407 13:30:49.881277 667 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1df7625042191f4e3be7c9854e1ca36378248104241854b01edb494d6c420c34
Apr 07 13:30:49 old-k8s-version-856421 kubelet[667]: E0407 13:30:49.882356 667 pod_workers.go:191] Error syncing pod 6d80702a-5159-408f-b602-141dd80c115c ("dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-52tjd_kubernetes-dashboard(6d80702a-5159-408f-b602-141dd80c115c)"
Apr 07 13:30:53 old-k8s-version-856421 kubelet[667]: E0407 13:30:53.882047 667 pod_workers.go:191] Error syncing pod beba250e-b7d6-48c9-9538-9052a30383ec ("metrics-server-9975d5f86-tkvrz_kube-system(beba250e-b7d6-48c9-9538-9052a30383ec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
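The kubelet log reduces to two failure loops: metrics-server stuck in ImagePullBackOff (its image is deliberately pointed at fake.domain, which can never be pulled) and dashboard-metrics-scraper in CrashLoopBackOff with a 2m40s back-off. For the latter, the previous container's output is usually the fastest lead:
    kubectl --context old-k8s-version-856421 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-52tjd --previous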
==> kubernetes-dashboard [3e668dbcb4dd3fef6e58de6d11f5522dbfad76947bfe7bab5fda3d0228b2f625] <==
2025/04/07 13:25:36 Starting overwatch
2025/04/07 13:25:36 Using namespace: kubernetes-dashboard
2025/04/07 13:25:36 Using in-cluster config to connect to apiserver
2025/04/07 13:25:36 Using secret token for csrf signing
2025/04/07 13:25:36 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/04/07 13:25:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/04/07 13:25:36 Successful initial request to the apiserver, version: v1.20.0
2025/04/07 13:25:36 Generating JWE encryption key
2025/04/07 13:25:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/04/07 13:25:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/04/07 13:25:37 Initializing JWE encryption key from synchronized object
2025/04/07 13:25:37 Creating in-cluster Sidecar client
2025/04/07 13:25:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:25:37 Serving insecurely on HTTP port: 9090
2025/04/07 13:26:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:26:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:27:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:27:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:28:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:28:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:29:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:29:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:30:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:30:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
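The dashboard itself starts cleanly and serves on port 9090; its metric client health check fails every 30 seconds only because the dashboard-metrics-scraper service has no healthy backend (its pod is the one crash-looping above). That can be verified by inspecting the service's endpoints:
    kubectl --context old-k8s-version-856421 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper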
==> storage-provisioner [2c994f7c46244b75a5d9fce69b3c3f15c9a2ce0a534a675952a6506d4c18ba61] <==
I0407 13:26:02.106687 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0407 13:26:02.192476 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0407 13:26:02.193247 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0407 13:26:19.680789 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0407 13:26:19.680959 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-856421_f959ad1a-c52c-4bcb-af4f-c159208f638a!
I0407 13:26:19.684640 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"be93a60a-1bee-444f-ada8-fa2850a45a39", APIVersion:"v1", ResourceVersion:"862", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-856421_f959ad1a-c52c-4bcb-af4f-c159208f638a became leader
I0407 13:26:19.784206 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-856421_f959ad1a-c52c-4bcb-af4f-c159208f638a!
==> storage-provisioner [d05c978cfa5a3d8c7465b05f0087ddff9b59b6291d3e17156779dbb770748849] <==
I0407 13:25:16.629182 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0407 13:25:46.637924 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
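Two storage-provisioner generations as well: d05c97... died fatally when its in-cluster apiserver call to 10.96.0.1:443 timed out during the restart window, and its replacement 2c994f... then acquired the k8s.io-minikube-hostpath leader lease and started the provisioner controller normally. The lease object (an Endpoints resource, per the event above) can be inspected with:
    kubectl --context old-k8s-version-856421 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml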
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-856421 -n old-k8s-version-856421
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-856421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-tkvrz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-856421 describe pod metrics-server-9975d5f86-tkvrz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-856421 describe pod metrics-server-9975d5f86-tkvrz: exit status 1 (122.11588ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-tkvrz" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-856421 describe pod metrics-server-9975d5f86-tkvrz: exit status 1
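The NotFound on that final describe is most plausibly a race: the pod listing a moment earlier still saw metrics-server-9975d5f86-tkvrz (the kubelet was retrying it as late as 13:30:53), and the ReplicaSet evidently replaced the pod before the describe ran. Querying by label rather than by name would avoid that gap (assuming the addon's usual k8s-app label):
    kubectl --context old-k8s-version-856421 -n kube-system describe pod -l k8s-app=metrics-server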
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.66s)